Day 7 – Parsing Firefox's user.js with Raku

One of the simplest ways to properly configure Firefox, and to make the configuration syncable between devices without needing third-party services, is through the user.js file in your Firefox profile. This is a simple JavaScript file that generally contains a list of user_pref function calls. Today, I’ll be showing you how to use the Raku programming language’s Grammars to parse the contents of a user.js file. Tomorrow, I’ll expand on the basis created here to allow people to programmatically interact with the user.js file.

The format

Let’s take a look at the format of the file first. As an example, let’s use the startup page configuration setting from my own user.js.

user_pref("browser.startup.homepage", "");

Looking at it, we can deconstruct one line into the following elements:

  • Function name: in our case this will almost always be the string user_pref;
  • Opening bracket;
  • List of arguments, separated by ,;
  • Closing bracket;
  • A ; ending the statement.

We can also see that string arguments are enclosed in ". Integers, booleans and null values aren’t quoted in JavaScript, so that’s something we need to take into account as well. But let’s set those aside for now, and first get the example line parsed.

Setting up the testing grounds

I find one of the easiest ways to get started with writing a Grammar is to just write a small Raku script that I can execute to see if things are working, and then extend the Grammar step by step. The starting situation would look like this.

grammar UserJS {
  rule TOP { .* }
}

sub MAIN () {
  my @inputs = ('user_pref("browser.startup.homepage", "");');

  for @inputs {
    say UserJS.parse($_);
  }
}

Running this script should yield a single Match object containing the full test string.

「user_pref("browser.startup.homepage", "");」

The 「 and 」 markers indicate that we have a Match object, which in this case signifies that the Grammar parsed the input correctly. This is because of the placeholder .* that we’re starting out with. Our next steps will be to add rules in front of the .* until that particular bit doesn’t match anything anymore, and we have defined explicit rules for all parts of the user.js file.

Adding the first rule

Since the example starts with the static string user_pref, let’s start by matching that with the Grammar. As this is the name of the function, we’ll add a rule named function-name to the grammar, which just has to match a static string.

rule function-name {
  'user_pref'
}

Next, this rule needs to be incorporated into the TOP rule, so it will actually be used. Rules are whitespace insensitive, so you can rewrite the TOP rule to put all the elements we’re looking for one after another, each on its own line. This will make it more readable in the long run, as more things will be tacked on as we continue.

rule TOP {
  <function-name>
  .*
}

Running the script now will yield a little more output than before.

「user_pref("browser.startup.homepage", "");」
 function-name => 「user_pref」

The first line is still the same, which is the full match. It’s still matching everything, which is good. If it didn’t, the match would fail and it would return a Nil. This is why we keep the .* at the end.

There’s an extra line this time, though. This line shows the function-name rule having a match, and the match being user_pref. This is in line with our expectations, as we told it to match that literal, exact string.
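To see the Nil failure mode in isolation: a grammar whose TOP rule can’t match our input returns Nil instead of a Match object. (A throwaway example, not part of our parser.)

```raku
# A grammar that only accepts the literal string 'nope'.
grammar Strict {
  rule TOP { 'nope' }
}

# Our user.js line doesn't match, so parse returns Nil.
say Strict.parse('user_pref("browser.startup.homepage", "");');  # Nil
```

This is exactly why keeping the trailing .* around during development is handy: the overall parse keeps succeeding while we tighten the rules in front of it.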

Parsing the argument list

The next part to match is the argument list, which consists of an opening bracket, a matching closing bracket, and a number of arguments in between them. Let’s make another rule to parse this part. It may be a bit naive for now; we will improve on it later.

rule argument-list {
  '(' .+? ')'
}

Of course, the TOP rule will need to be expanded to include this as well.

rule TOP {
  <function-name>
  <argument-list>
  .*
}

Running the script will yield another line, indicating that the argument-list rule matches the entire argument list.

「user_pref("browser.startup.homepage", "");」
 function-name => 「user_pref」
 argument-list => 「("browser.startup.homepage", "")」

Now that we know this basic rule works, we can try to improve it to be more accurate. It would be more convenient if we could get a list of arguments out of it, and not include the brackets. Removing the brackets is the easier part, so let’s do that first. You can use the <( and )> markers to indicate where the result of the match should start and end respectively.

rule argument-list {
  '(' <( .+? )> ')'
}

You can see that the output of the script now doesn’t show the brackets on the argument-list match. Now, to make a list of the arguments, it would be easiest to create an additional rule to match a single argument, and match the , as a separator for the arguments. We can use the % operator for this.
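As a quick aside, here is the % separator quantifier on its own, in a throwaway grammar (not part of our parser):

```raku
# One or more digit groups, separated by commas.
grammar Numbers {
  token TOP { <num>+ % ',' }
  token num { \d+ }
}

# Each <num> between the separators becomes its own capture.
say Numbers.parse('1,2,3')<num>.elems;  # 3
```

The same `<quantified-atom> % <separator>` pattern is what the argument-list rule uses for its arguments.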

rule argument-list {
  '(' <( <argument>+ % ',' )> ')'
}

rule argument {
  .+
}

However, when you try to run this, all you’ll see is a Nil as output.

Debugging a grammar

Grammars are quite a hassle to debug without any tools, so I would not recommend trying that. Instead, let’s use a module that makes this much easier: Grammar::Tracer. This will show information on how the Grammar is matching all the stuff. If you use Rakudo Star, you already have this module installed. Otherwise, you may need to install it.

zef install Grammar::Tracer

Now you can use it in the script by adding use Grammar::Tracer at the top of the script, before the grammar declaration. Running the script now will yield some content before you see the Nil.

| function-name
| * MATCH "user_pref"
| argument-list
| | argument
| | * MATCH "\"browser.startup.homepage\", \"\");"
| * FAIL

Looking at this, you can see that an argument is being matched, but it’s being too greedy. It matches all characters up until the end of the line, so the argument-list can’t match the closing bracket anymore. To fix this, we must update the argument rule to be less greedy. For now, we’re just matching strings that appear within double quotes, so let’s change the rule to more accurately match that.

rule argument {
  '"' <( <-["]>* )> '"'
}

This rule matches a starting ", then any number of characters that are *not* a ", then another ". There’s also <( and )> in use again, so the surrounding " don’t end up in the result. If you run the script again, you will see that the argument-list contains two argument matches.

「user_pref("browser.startup.homepage", "");」
 function-name => 「user_pref」
 argument-list => 「"browser.startup.homepage", ""」
  argument => 「browser.startup.homepage」
  argument => 「」
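
The quote-stripping effect of <( and )> can also be seen in a stand-alone one-liner, independent of the grammar:

```raku
# The match result is limited to what sits between the capture markers,
# so the surrounding quotes are excluded from the final Match.
say '"hello"' ~~ / '"' <( <-["]>* )> '"' /;  # 「hello」
```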

I’m ignoring the output of Grammar::Tracer for now, since there are no problems arising. I would generally suggest leaving it in there until you’re completely satisfied with your Grammars, so you can immediately see what’s going wrong where during development.

The statement’s end

Now all that’s left to explicitly match in the TOP rule is the statement terminator, ;. This can replace the .*, since it’s the last character of the string.

rule TOP {
  <function-name>
  <argument-list>
  ';'
}

The final Grammar should look like this.

grammar UserJS {
  rule TOP {
    <function-name>
    <argument-list>
    ';'
  }

  rule function-name {
    'user_pref'
  }

  rule argument-list {
    '(' <( <argument>+ % ',' )> ')'
  }

  rule argument {
    '"' <( <-["]>* )> '"'
  }
}

Now, the problem here is that it’s still quite naïve. It won’t deal with double quotes inside strings, nor with Boolean values or integers. The current Grammar is also not capable of matching multiple lines. All of these problems can be solved, some more easily than others. Come back here tomorrow to learn how!

Day 6 – Put some (GitHub) Actions in your Raku (repositories)

After being in beta for quite some time, GitHub Actions were finally introduced to the general public in November 2019. They have very quickly become ubiquitous, above all combined with the other release recently made by GitHub, the package (and container) registry.

We can put them to good use with our Raku modules. We’ll see how.

We could use some action

An action is a script that is triggered by an event in your repository. In principle, anything you or a program does when interacting with a repository could trigger an action. Of course, this includes git actions, which include basically pushing to the repository, but also all kinds of things happening in the repository, from changes in the wiki to adding a review to a pull request.

And what kind of things can you do? GitHub creates a container with some basic toolchains, as well as language interpreters and compilers of your choice. At the very basic level, what you have is a container where you can run a script triggered by an event.

GitHub actions reside in a YAML file placed within the .github/workflows directory in your repository. Let’s go for our first one:

name: "Merry Christmas"
on: [push]
jobs:
  merry-christmas:
    runs-on: ubuntu-latest
    steps:
      - name: Merry Xmas!
        run: echo Merry Xmas!

This script is as simple as it gets. It contains a single job, with a single step. Let’s go little by little:

  • We give it a name, “Merry Christmas”. That name will show up in your list of actions
  • on is the list of events that will trigger this action. We will just list a single event.
  • jobs contains the set of jobs to be run; by default, independent jobs run in parallel.
  • Every job will have its own key in the jobs map, which will be used to refer to it (and also to store variables, more on this later), and can run in its own environment, which you have to select. We’ll take ubuntu-latest, which is a Bionic box, but there are others to choose from (more on this later).
  • A job has a series of steps, every one with a name and then a sequence of commands. run will run on whatever environment is defined in that specific step; in this case, a simple shell script that prints Merry Xmas!

Since we’ve instructed via the on command to run every time there’s a push to the repository, the tab Actions will show the result of running it, just like this. If nothing goes wrong, and how could it, since it’s simply a script, it will show green check marks and produce the result:

Merry Xmas from a GitHub Action

These steps form a kind of pipeline, and every step can produce an output or change the environment that is going to be used in the next step; that means that you can create pipe actions that just process input and produce something for an output, like this one

name: "One step up"
on: [push]
jobs:
  one-step-up:
    runs-on: ubuntu-latest
    steps:
      - name: Pre-Merry Xmas!
        env:
          greeting: "Merry"
          season: "Xmas"
        run: |
          sentence="$greeting $season!"
          echo ::set-env name=SENTENCE::$sentence
      - name: Greet
        id: greet
        run: |
          output=$(python -c "import os; print(os.environ['SENTENCE'])")
          echo ::set-output name=printd::$output
      - name: Run Ruby
        env:
          OUTPUT: ${{ steps.greet.outputs.printd }}
        run: /opt/hostedtoolcache/Ruby/2.6.3/x64/bin/ruby -e "puts ENV['OUTPUT']"

The first step in this action, code-named “Pre-Merry Xmas!”, declares a couple of environment variables via env. We will collate them in a single sentence. But here comes the gist of it: GitHub Actions use meta-sentences, preceded with ::, that are printed to output and interpreted as commands for the next step. In this case, ::set-env sets an environment variable.

The next step showcases the use of Python, which is another default tool in this environment; as a matter of fact, it’s included in every environment out there, together with Node; you can use it in its default version or set the version as an action variable. This step also uses a similar mechanism to set, instead of an environment variable, an output that can be used by the next step.

Unlike Python, Ruby does not have a default version available in the path; however, it’s only a matter of finding the path to it and you can use it, like here. This step also uses the output of the previous step; GHAs have contexts, in this case a step context, which can be used to access the output of previous steps. steps.greet.outputs.printd accesses the context of the step whose id is greet (which we declared via the id key there), and since we declared the output to be called printd, outputs.printd will retrieve the output by that name. Contexts are not available from within the action environment, which is why we need to assign it first to an environment variable. Output will look like this, and it will use green check marks, as well as reveal the output in the raw log and if you click on the step name.

If you are a long-term Perl user like I am, you will miss it. Ruby, Python, Node, popular languages, fair enough. But Perl is in the base Ubuntu 16.04 install; even so, it seems to have been eliminated from these environments. Where do we have to go to use Perl? To the Windows environments. Let’s use it to create a polite bot that greets you when you create or edit an issue:

name: "We 🎔 Perl"
on:
  issues:
    types: [opened, edited, milestoned]
jobs:
  greet:
    runs-on: windows-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2-beta
      - name: Maybe greet
        id: maybe-greet
        env:
          HEY: "Hey you!"
          GREETING: "Merry Xmas to you too!"
          BODY: ${{ github.event.issue.body }}
          TOKEN: ${{ secrets.GITHUB_TOKEN }}
          ISSUE: ${{ github.event.issue.number }}
        run: |
          $output=(perl utils/
          $body= @{ 'body'= $output } | ConvertTo-Json
          $header= @{
            'Authorization'="token $ENV:TOKEN"
          }
          Invoke-RestMethod -Uri "$ENV:ISSUE/comments" -Method 'Post' -Body $body -Headers $header

Check out first the on command, that is set to be fired every time an issue is created, edited or assigned a milestone, an action that, for some reason, is called being milestoned.

This lawn has been milestoned

The main difference you see above is the presence of windows-latest as the environment this action will be run on. But next we see another nice thing about actions: they can simply be published on GitHub, and can be reused. This checkout action does what it says: it checks out the repo code, which is not available by default. We are not really going to run any check on the code, but we need the little Perl script we’ve created. More on this later.

The next step is the one that actually will operate when an issue is created, changed or, wait for it, milestoned. We declare two different environment variables: one will be used to comment on issues that don’t mention “Merry”, the other if they do. But the nice thing comes next: we can work with the issue body, which is available as a context variable: github.event.issue.body. The next variable is the magic key that opens the door to the GitHub API. No need to upload it or anything, it will be there ready for you, and GitHub will keep track of it and hide it wherever it appears. We will also need the issue number to comment on it, and we store it in the $ISSUE variable.

Let’s next run the action. We will use the fantastic Perl regexes to check for the presence of the word Merry in the body, using this mini-script:

print( ( ($ENV{BODY} =~ /Merry/) == 1)? $ENV{GREETING} : $ENV{HEY});

The next few PowerShell commands are, by far, the most difficult part of this article.

We run the script so that we capture, and store, the result in a variable. The next commands create PowerShell hashes, and $body is converted to JSON. By using Invoke-RestMethod we use the GitHub API to create a comment with the greeting in the issue that was milestoned, or any of the other events.

Issue commented and milestoned

As the image above shows, a couple of comments: one from when it was created and the other, well, check the image.

However, last time we checked this was a Raku Advent Calendar, right? We want our Raku!

Using Raku in GitHub actions

Last time I checked, Raku was not among the very limited number of languages that are available in any of the environments. However, that does not mean we cannot use it. Environments can be augmented with anything that can be installed, in the case of Windows using Chocolatey (or downloading it via curl or any other command). We’ll also use it to run a real test. Dummy, but real. All actions actually either succeed or fail; you can use that for carrying out tests. Check out this action:

name: "We 🎔 Raku"
on: [push, pull_request]
jobs:
  test:
    runs-on: windows-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2-beta
      - name: Install and update
        run: |
          cinst rakudostar
          $env:Path = "C:\rakudo\bin;C:\rakudo\share\perl6\site\bin;$env:Path"
          zef test .

Which is testing using this script:

#!/usr/bin/env perl6
use v6;
use Test;

constant $greeting = "Merry Xmas!";
constant $non-greeting = "Hey!";

is( greet( "Hey", $greeting, $non-greeting), $non-greeting, "Non-seasonal salutation OK");
is( greet( "Merry Xmas!", $greeting, $non-greeting), $greeting, "Seasonal salutation OK");

sub greet( $body, $greeting, $non-greeting ) {
    ($body ~~ /[M|m] "erry"/) ?? $greeting !! $non-greeting;
}

done-testing;

The regex here uses the Raku syntax to perform more or less the same thing that the previous Perl script did, but let’s focus on the action above. It runs three PowerShell commands, one of them using Chocolatey to install Rakudo Star, and then set the command path and refresh it so that it can be used in the last command, the usual zef test . that actually runs the tests.

Rakudo Star has not been updated since March; a new update is coming very soon, but meanwhile, the combination Windows/GitHub Actions/Rakudo is not really the best way to go, since the bundled zef version is broken and can’t be updated from within a GitHub action.

This test takes quite a while; you have to download and install Raku every single time, plus it does not work if you need to install any additional module. Fortunately, there are many more ways to do it. Meet the Raku container.

Using dockerized actions

GitHub actions can be created in two different environments. One of them is called node12, and can actually run on any operating system; the other is docker, which is Linux exclusive.

These containers will be built on the fly and then executed, with commands run directly inside the container. By default, the ENTRYPOINT of the container will be run, as usual. Previously, we used actions/checkout for checking out the repository; these official actions can be complemented with our own. In this case, we will use the Raku container action, which you can also check out in the Actions marketplace.

This action basically contains a Dockerfile, this one:

FROM jjmerelo/alpine-perl6:latest
LABEL version="4.0.2" maintainer="JJ Merelo <>"
# Set up dirs
ENV PATH="/root/.rakudobrew/versions/moar-2019.11/install/bin:/root/.rakudobrew/versions/moar-2019.11/install/share/perl6/site/bin:/root/.rakudobrew/bin:${PATH}"
RUN mkdir /test
VOLUME /test
# Will run this
ENTRYPOINT raku -v && zef install --deps-only . && zef test .

This Dockerfile does little more than establish the system PATH and an entry point that can be used for testing. It does not have anything that is Action-specific.

It uses the very basic Alpine Raku container, which is the basis for a whole series of Raku testing containers.

But again, let’s go back to where the action is, that is, er, the action.

name: "We 🎔 Ubuntu, Docker and Raku"
on: [push, pull_request]
jobs:
  adventest:
    name: AdvenTest
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Runs tests
        uses: JJ/raku-container-action@master

Sweet and simple, right?

Yes, I couldn’t help but call the test for the Advent Calendar AdvenTest.

It checks out the repository using the official checkout action, and then runs the test, which is the default command in the Dockerfile that action is built from. It would also install ecosystem dependencies, if there were any.

How long does this one take? Just short of 30 seconds, or one quarter of what the other one took.

Tell me more!

GitHub actions are a world of possibilities (and occasionally, also a world of pain). Containerized tools mean that you will be able to work on the repository and the world at large using your favorite language, that is, Raku, starting actions from any kind of events, interactive or periodical; for instance, you could schedule tests every week, or start deployments when tests have been cleared.
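A scheduled run could look like this (a sketch; the cron line and job key are made up, the container action is the one shown above):

```yaml
# Hypothetical example: run the test suite every Monday at 06:00 UTC
name: "Weekly Raku tests"
on:
  schedule:
    - cron: '0 6 * * 1'
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: JJ/raku-container-action@master
```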

If you liked CI tools such as Travis or CircleCI, you will love GitHub actions. Put them to good use in your Raku repositories.

Day 5 – Modular Raku Command Line Apps


It was three weeks before Christmas and Santa’s workshop was a mess. Elves were running around trying to get everything ready and it didn’t look like anything would be.

As soon as Santa walked in he was surrounded by a horde of unhappy elves all complaining at once.

“The system is too slow!” one piped up. “It takes a second just to print out a help file.”

“And the help file is wrong! It’s out of date.” said a second.

“I just found the command to move a child from the Naughty list to the Nice list would expect something else and die silently!!!” broke in a third, extremely harassed voice, belonging to an elf with a very long list.

Santa was worried; he’d spent much of the year working with a lot of the Dev-elfs to update their older systems to use Raku, and they thought everything was going swimmingly. All the different modules worked really well in testing, and the single command to manage the various systems was the only section he’d not really been involved in.

He looked around for Snowneth, the Elf in charge of that project, for some answers, but couldn’t see him. A quick set of questions and Santa found out Snowneth had come down with Elf flu a few months ago and was still off sick.

Santa kicked himself for not keeping up with things, but there was so much to do, he made a note to pop in and check on Snowneth later. Now to find out what had gone wrong and why no one had told him!

He opened up his laptop and tried out the main system service in test mode:

helper -?

A couple of seconds later the system gave him a command line prompt… nothing else.

helper -h

This time he got a documentation page, a quick scan of it and he could see a number of the commands were documented incorrectly and there were at least two more recent ones missing. Fearing what he would see he opened up the git log, all the commits had been done by a number of names he recognised. The junior Dev-elfs who had been assigned to Snowneth’s team.

He closed his laptop and went to find a coffee machine.

A few hours, and coffees, later, Santa and a number of young worried looking elves were huddled in a small office. Santa had his laptop open and the code for the service script was open in front of them, some of the 1000 lines of it.

By now Santa had found out what had happened: Snowneth had gone sick and everyone had thought someone else was going to appoint a new team lead. Meanwhile the juniors did their best to get the job done. Santa took a moment to get his breath and compose his voice.

“Firstly I have to apologise. I’m sure you’ve all been under a lot of stress and you’ve all done your best to get this vital work done.”

The juniors perked up, by now they’d all got the message that their work was causing the workshop to run slow and maybe even cancel Christmas! They had been expecting to be shouted at, maybe even sent to work mucking out the reindeer.
The traditional job for an elf who had messed up.

“We can get into what happened later, for now we need to fix this code as fast as possible and get things unblocked. I’m hoping you can all help me to get this done.”

The juniors by now where nodding and smiling, Santa was happy, he should have been on top of this mess and he hoped by getting them involved in the clean up they could regain some confidence.

“Right, so what are the problems we’ve got?”

Everyone spoke up and they quickly whiteboarded a list:

  • Slow to start for any command
  • Some commands have incorrect input validation
  • Out of Date documentation

There were a few other things but these three seemed to be the main ones.

“Ok. So slow startup. I think this is obvious right?”

All the juniors looked at him, then one raised his hand.

“Sniffy isn’t it? What do you think? And no need to raise your hand, just speak up… In turn try not to shout over each other.”

“Is it because the script is so long? I think I read that a Raku script is compiled when you call it? If we make the script shorter it will run faster?”

“That’s pretty much it yes, though less complex would be maybe a better goal. We’ve got a lot of if clauses and the like. The important thing to remember is Module code is precompiled but the main script isn’t. So wherever possible we should be using modules.”

The juniors nodded their heads. Then one spoke up.

“But we are using the Modules, they are all there at the top of the script but then we have to work out what command people are calling then work out what arguments they are passing. Then…” Santa held up his hand.

“I can see that Wibbly. I think that’s where we should start.” He pointed to a section of code on the screen. “Who can tell me what’s wrong with this.”

use Workshop::ListController::Nice;

my $command = @*ARGS[0];

if ( $command ~~ 'list' ) {
    my $list-command = @*ARGS[1];
    if ( $list-command ~~ 'add' ) {
        if ( @*ARGS[2] ~~ 'nice' ) {
           Workshop::ListController::Nice.add-child( @*ARGS[3] );
        }
    }
}

And so on. The juniors looked at it and talked among themselves. Sniffy spoke up again.

“It’s a bit complicated. It didn’t start that way, but as we added commands it got bigger. Also, we’re not checking the child’s name is valid.”

“That’s true but what I wondered was why are you reading @*ARGS?”

There was a look of confusion on their faces.

“Why don’t we have a MAIN subroutine?”

Still confusion.

“Ok. You all go look that up. I’ll whip up an example.”

He quickly typed while the elves went to the Raku Docs site and started searching. As he heard their exclamations rise and then slowly quieten, he turned away from the keyboard.

use Workshop::ListController::Nice;

multi sub MAIN( "list", "add", "nice", ValidChildName $child ) { Workshop::ListController::Nice.add-child( $child ); }
multi sub MAIN( "list", "add", "nice", Str $invalid-child ) { die "Invalid childname {$invalid-child}"; }
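
(The ValidChildName type used in these signatures is never defined in the story; a minimal sketch of such a subset type, with a made-up validity rule, might look like this:)

```raku
# Hypothetical definition: a child's name is one or more "words"
# (letters, digits, underscores) separated by single spaces.
subset ValidChildName of Str where /^ \w+ [\h \w+]* $/;

# Any sub can now use it as a type constraint; the illustrative
# add-nice helper below is made up for this sketch.
sub add-nice(ValidChildName $child) { "added $child" }

say add-nice('Rudolph');  # added Rudolph
```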

There was a round of applause from the juniors.

“So we can use Raku multi dispatch with MAIN to create all our commands down to quite a granular level and also have type checking in place. I think this is a good place to start. I’d like us to get to the point where all we have in this script is MAIN, and off we go.”

The next day, things were looking better: the script was faster to start and the input validation issues had been resolved. Everyone was feeling a lot better. Santa had even found the time to see Snowneth and make sure he was OK.

“Ok… Now lets look at this.”

He pointed at the largest subroutine left in the file, it started like this.

multi sub MAIN( :$h where so * ) {
    say "Usage:";
    say "  helper list add nice [child] : Adds a child to the Nice list.";
    say "  helper stable rota list : Lists the current rota for cleaning the stables.";
}

One of the elves jumped to her feet, eyes sparkling.

“Gimlina I believe you have some thoughts on this.”

“We should use declarator blocks!” Santa smiled, he’d had to hold back the young elf’s enthusiasm yesterday in order to keep the team focused on the task in hand. He smiled and nodded.

“Carry on.”

“If we add declarator blocks to our subroutines and arguments we get pre-generated documentation for free. And it gets updated whenever the code changes.”

“Can you give a demonstration?”

She smiled and brought up some code.

#| Add a child to the nice list
multi sub MAIN( "list", "add", "nice",
    ValidChildName $child #= Child name to add
) {
    Workshop::ListController::Nice.add-child( $child );
}

multi sub MAIN( "list", "add", "nice", Str $invalid-child ) is hidden-from-USAGE { die "Invalid childname {$invalid-child}"; }

“So the #| does what?”

“Attaches the block to whatever comes after it. #= attaches the block to the preceding item.”

“And is hidden-from-USAGE?”

“Well when you call the script and it doesn’t know what to do, or you pass -? it calls the USAGE function and that displays the $*USAGE string. Which is generated from the declarator blocks. But some subs you don’t want to display, so you can hide them.”

She quickly typed

helper -?
  helper list add nice [child] -- Add a child to the nice list
        Child name to add 

Santa nodded and the rest of the juniors burst into enthusiastic cheers. Gimlina looked happy as they all turned to adding documentation to the helper script.

Things were looking better, the workshop was sounding a lot happier and the documentation was going well. Santa was sitting in with the juniors just checking final things over when he saw a merge request in one of the module repositories. He looked confused, they’d been in code freeze for a week now and only urgent bug fixes should be raised and he’d not heard of any issues beyond the helper script. He cursed himself again and thought that maybe he did need some product manager elves.

When he opened the merge request his eyes widened, he turned to one of the quieter juniors, they were very smart but tended to not speak up, just kept their head down and worked on things.

“Erry? What’s the MR in the Workshop::ListController::Nice module?”

As he asked, and waited on them to come over he took a look at the code, his eyes now so wide they looked like they would bulge out of his head.

#| Add a child to the nice list
multi sub MAIN( "list", "add", "nice",
    ValidChildName $child #= Child name to add
) is export(:MAIN) {
    Workshop::ListController::Nice.add-child( $child );
}

multi sub MAIN( "list", "add", "nice", Str $invalid-child ) is hidden-from-USAGE is export(:MAIN) { die "Invalid childname {$invalid-child}"; }

“Well, I was thinking, sir,” the elf said from next to him, in a surprisingly loud voice. “You said the module code is precompiled. So if we moved our MAIN subs into the modules they’d be precompiled.”

“And the module teams could manage their own command line interfaces and documentation!” Santa exclaimed happily, Erry nodded smiling.

The rest of the day was a riot of work as all the other dev-elfs (who had been looking forward to a nice month long code freeze before the post holiday backlog fighting) got pulled in and told the new plan.

By the end of the day the helper script was a long line of module use statements and one solitary function.

use Workshop::ListController::Nice :MAIN;

#| Display the help
multi sub MAIN( :h($help) where so * ) {
    say $*USAGE;
}

Santa looked at it and at Snowneth who had finally made it back to the workshop that afternoon. He shrugged.

“I didn’t know about -? and I have got used to -h. Can’t hurt, right?”

Santa nodded and they went off to help work in the workshop.

Day 4 – Not tripping over tripcodes

Greetings. Today we are going to look at an implementation of tripcodes, a kind of hashing used for signing posts anonymously on the Internet.

There are different algorithms to do so, but one that we are interested in is one generating non-secure, old-fashioned tripcodes.

So what is it?

Say there is a website that allows leaving comments while staying anonymous. No registration, no login, no usernames.

You respond to a post, and then a person responds to your response. You start a conversation. You know that your posts are yours, but what about all the other users? Are you still talking to the same person, or to a bunch of kids playing their tricks on you? No idea! To resolve that sort of confusion in some situations, a tripcode can be used.

The idea is simple: along with your post you can pass your desired nickname and a password. The website takes the password and hashes it into a tripcode. On displaying posts, the tripcodes are attached to messages, so you can make sure this is the same person who knows the password. Of course, nobody requires people to claim authorship of their posts, but we are leaving that aside, as we are interested in an implementation.

Examples at hand

Implementing the algorithm takes a single subroutine. Yet we need a way to test our tripcodes. Let’s define some tests for our tripcode subroutine:

use Test;

is tripcode('a'), '!ZnBI2EKkq.';
is tripcode('¥'), '!9xUxYS2dlM';
is tripcode('\\'), '!9xUxYS2dlM';
is tripcode('»'), '!cPUZU5OGFs';
is tripcode('?'), '!cPUZU5OGFs';
is tripcode('&'), '!MhCJJ7GVT.';
is tripcode('&amp;'), '!QfHq1EEpoQ';
is tripcode('!@#heheh'), '!eW4OEFBKDU';

Raku code

Now let’s take a look at the algorithm:

  • Escape HTML characters
  • Convert all characters to the CP932 encoding. For characters where that is not possible, use the ? symbol
  • Decode the resulting bytes as UTF-8
  • Generate a salt for our hash. To do this, append the string H. to the decoded string (as it might be empty!) and take the second and third characters. Next, substitute any “weird” characters (in ASCII terms, anything with a code below 46 (.) or above 122 (z)) with a dot.
  • Translate some non-word characters (:;<=>?@[\]^_`) into ABCDEFGabcdef.
  • Use the UNIX function crypt with the decoded string and the salt we got, and take the last 10 characters of the result.
  • That’s all!

There are quite a lot of steps, but let’s see how we can code such a task in Raku.
Let’s start with a sub declaration:

sub tripcode($pass is copy) {


We are going to modify the $pass variable in-place, so the is copy trait of the parameter will protect us against the “passed Str value is immutable” error.

Next, escape HTML:

sub tripcode($pass is copy) {
    $pass .= trans(['&', '<', '>'] => ['&amp;', '&lt;', '&gt;']);

With the trans method, we can replace substrings in a string using a positional “left to right” correspondence, so & is replaced with &amp;, < is replaced with &lt;, etc.
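A quick way to convince yourself how the positional mapping works (a standalone one-liner, not part of the tripcode sub):

```raku
# Each character on the left maps to the replacement at the same position on the right:
say '<b>&</b>'.trans(['&', '<', '>'] => ['&amp;', '&lt;', '&gt;']);
# &lt;b&gt;&amp;&lt;/b&gt;
```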

Next thing – dances with Windows 932.

$pass .= trans(['&', '<', '>'] => ['&amp;', '&lt;', '&gt;']);
$pass = ([~] $pass.comb.map({ (try .encode('windows-932')) // '?'.encode })).decode;

Let’s imagine writing this line step by step:

# split $pass into single characters
# and map over every character in the list resulting from the `comb` method call
$pass.comb.map({  })
# try to encode it into the encoding we want
$pass.comb.map({ try .encode('windows-932') })
# when `try` returns `Nil`, the `//` operator kicks in, which means "if the left side is not defined, use the right side"
$pass.comb.map({ (try .encode('windows-932')) // '?'.encode })
# use the [~] reduction metaoperator, which is a shortcut for "join this list using this operator on pairs of elements"
([~] $pass.comb.map({ (try .encode('windows-932')) // '?'.encode }))
# at last, decode the resulting buffer and assign it to the variable
$pass = ([~] $pass.comb.map({ (try .encode('windows-932')) // '?'.encode })).decode;

Now we need to generate some salt for our hash.

my $salt = "{$pass}H.".substr(1, 2).subst(/<-[. .. z]>/, '.', :g).trans(':;<=>?@[\\]^_`' => 'ABCDEFGabcdef');

Firstly, we append the H. part to the password, then take the second and third characters using the substr call. Note the second call is subst, which replaces anything outside of the regex range with a dot. Here, substr is short for substring, while subst is short for substitute. Then goes our trans method.
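We can trace the salt by hand for the last test password, !@#heheh (none of its characters are affected by the HTML-escaping or CP932 steps). Note the :g adverb used here so that every out-of-range character gets replaced, not just the first:

```raku
my $pass = '!@#heheh';
# "!@#hehehH." -> substr(1, 2) gives "@#";
# '#' (code 35) falls outside the . .. z range and becomes '.',
# while '@' is translated to 'G'.
my $salt = "{$pass}H.".substr(1, 2)
    .subst(/<-[. .. z]>/, '.', :g)
    .trans(':;<=>?@[\\]^_`' => 'ABCDEFGabcdef');
say $salt;   # G.
```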

As the next thing, we need to call the UNIX crypt function. Luckily, we don’t need to implement it! In Raku’s ecosystem there is already a module, Crypt::Libcrypt, written by Jonathan Stowe++. Let’s install it:

zef install Crypt::Libcrypt

Now we can import this module and have the crypt subroutine at our service. The last line is simple:

'!' ~ crypt($pass, $salt).substr(*-10, 10);

We don’t need to write an explicit return statement, as the last statement of a block is considered to be its return value. That value is a call to the crypt subroutine chained with our old friend substr, whose first argument looks funny this time. The second argument is, as usual, the number of characters we want, while the first one is an expression with a Whatever star. On the call, the caller’s length is passed into this micro-block of code, so it translates into 'foo'.substr('foo'.chars - 10, 10) (but smarter inside).
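The same Whatever-star idiom on a plain string:

```raku
# *-5 becomes "length of the caller minus 5", i.e. index 7 here:
say 'Hello, Raku!'.substr(*-5, 5);   # Raku!
```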

Comprising everything, we get a full definition:

sub tripcode($pass is copy) {
    $pass .= trans(['&', '<', '>'] => ['&amp;', '&lt;', '&gt;']);
    $pass = ([~] $pass.comb.map({ (try .encode('windows-932')) // '?'.encode })).decode;
    my $salt = "{$pass}H.".substr(1, 2).subst(/<-[. .. z]>/, '.', :g).trans(':;<=>?@[\\]^_`' => 'ABCDEFGabcdef');
    '!' ~ crypt($pass, $salt).substr(*-10, 10);
}

Check it:

> perl6 tripcode.p6
ok 1 - 
ok 2 - 
ok 3 - 
ok 4 - 
ok 5 - 
ok 6 - 
ok 7 - 
ok 8 -

A success, all the checks we prepared pass! As we successfully implemented the algorithm using only four lines of code, it is time to refill your hot drink. Have a nice day!

Day 3 – Stack Frame Reduction


What is a Stack Frame?


For those not familiar with the stack, it is a bit of memory for your program to use. It is fast but limited.
Whenever you call a procedure (function, method,… naming is a complicated thing) your program gets a bit of storage on the stack, which we call a frame.
The stack frame gets used for storing parameters, local variables, temporary storage, and some information about the calling context.
This means that if you have a recursive procedure call, your program keeps asking for stack frames until you eventually return a value and the memory is freed up.

A quick and simple example:

Let us take the standard example of a basic recursive algorithm, the factorial:

sub factorial (Int $n --> Int) {
        $n == 0 ?? 1 !! $n * factorial($n - 1)
}

This is a very simple example of recursion, and usually we don’t have to worry about stack frame buildup in this code. That said, this is a good starting point for showing how to reduce the buildup.

GOTO reduction:

Didn’t Larry start with Basic?

This way of reducing stack frame buildup should be familiar to most people, it’s the way procedural programming handles recursion.

The most basic implementation of this pattern looks like this:

sub factorial (Int $n is copy --> Int) {
        my Int $result = 1;
        MULT:
        $result *= $n;
        $n--;
        goto MULT if $n > 0;
        return $result;
}

GOTO is not yet implemented in Raku, but it should be fairly obvious we can easily replace this with an existing keyword:

sub factorial (Int $n is copy --> Int) {
        my Int $result = 1;
        while $n > 0 {
                $result *= $n;
                $n--;
        }
        return $result;
}

This does defeat the purpose of trying to use recursion, though. Therefore Raku offers the samewith keyword:

sub factorial (Int $n --> Int) {
        $n == 0 ?? 1 !! $n * samewith($n - 1);
}

There we go, recursion without incurring a thousand stack frames. I still think we’re missing something, though…


Trampoline reduction:

Everything is better with trampolines, with penguins, in space, or on ice.

A trampoline is a design pattern in Functional Programming. It is a little complicated compared to normal GOTO-style reduction, but in the right hands it can be very powerful.
The basics behind the trampoline pattern are as follows:

  • We can expect to do something with the value we’re computing.
  • We can just pass our TODO into the function that computes the value.
  • We can have our function generate its own continuation.
sub trampoline (Code $cont is copy) {
        $cont = $cont() while $cont;
}

So we pass the trampoline a function. That function is called. The function optionally returns a follow-up. As long as we get a follow-up, we keep calling it and assigning the result until we’re done.

It requires a little reworking of the factorial function:

sub factorial (Int $n, Code $res --> Code) {
        $n == 0 ?? $res(1) !! sub { factorial($n - 1, sub (Int $x) { $res($n * $x) }) }
}

To unpack that heap of stacked functions:

  • If $n is 0, we can just move on to the continuation.
  • Otherwise we return an anonymous function that calls factorial again.
  • The previous step propagates until we arrive at 0, where we get the result called with 1.
  • That multiplies the previous $n with 1, and propagates the result backwards.
  • Eventually the result is propagated to the outermost block and is passed into the continuation.

The way we would use the trampoline then follows:

trampoline(sub { factorial($n, sub (Int $x) { say $x; Nil }) });

Again, a bunch of tangled up functions to unpack:

  • We send an anonymous function to the trampoline that calls factorial with a number $n, and an anonymous continuation.
  • The continuation for the factorial is to say the result of the factorial and stop (the Nil).
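Assembled into one complete, runnable script (using 5 as a hypothetical sample input), the whole pattern looks like this:

```raku
# The trampoline: keep calling the returned Code until something falsy comes back.
sub trampoline (Code $cont is copy) {
    $cont = $cont() while $cont;
}

# Factorial in continuation-passing style: either finish via $res,
# or return the next bounce as an anonymous sub.
sub factorial (Int $n, Code $res --> Code) {
    $n == 0 ?? $res(1) !! sub { factorial($n - 1, sub (Int $x) { $res($n * $x) }) }
}

trampoline(sub { factorial(5, sub (Int $x) { say $x; Nil }) });
# 120
```

Note that each call to factorial returns immediately with either a result or a new Code object, so the trampoline loop keeps the stack flat while bouncing.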

Bonus round

Why would you use a trampoline for something that could be done more easily with a regular for loop?

sub factorial-bounce (Int $n --> Code) {
        sub { factorial($n, sub ($x) { say $x; factorial-bounce($x) }) }
}

Day 2 – CRUD with Cro::HTTP, a tutorial


Today we will go through this tutorial about writing a simple CRUD service using Cro. For the impatient, the link to the sources is at the end of the post.

Why would I want to read this piece of text and code?

  • A Cro::HTTP usage for a server-side application with authentication + authorization and a CRUD resource serving
  • Cro::WebApp templating usage
  • Cro::HTTP::Test usage
  • Setting up the services: Docker, nginx reverse proxy

Why would I want to read something else today?

  • An over-simplified mock in-memory database is used. Use whatever tool you find suitable for a reliable solution.
  • Project complexity is reduced to the bare minimum for a server-side application: no smart JavaScript on the client, no user-friendly UX patterns.
  • This post covers a lot of basics and is not aimed at experienced users.

Let’s go let’s go let’s go

So we are writing a collective blog.

Users can register, login and logout. They can create new posts, see posts, edit and delete their posts.

Let’s start by stubbing a new project using Cro command line tool:

➜  CommaProjects> cro stub http rest-blog ./rest-blog
Stubbing a HTTP Service 'rest-blog' in './rest-blog'...

First, please provide a little more information.

Secure (HTTPS) (yes/no) [no]: 
Support HTTP/1.1 (yes/no) [yes]: 
Support HTTP/2.0 (yes/no) [no]: 
Support Web Sockets (yes/no) [no]: 
➜  CommaProjects> cd rest-blog/

As usual, we also want to initialize a git repo for our project:

$ git init
$ git add .
$ git commit -m 'Initial commit'

Let’s look at the structure of the created stub:

  • lib directory contains sources for the application itself. Right now, it only has a sample router with a single route declared.
  • META6.json contains description of our project.
  • service.p6 describes how to start our service. By default, it starts a Cro server on host and port specified by environment variables and serves requests until the user sends Ctrl-C.

To start the application, service.p6 can be run directly, but it is more flexible to use the .cro.yml file, which describes one or more services this project consists of. There, service.p6 is specified as the path of the entrypoint, so the Cro command line tool runs the script according to the config for you.

Let’s try it out:

➜  rest-blog git:(master) ✗ cro run .
▶ Starting rest-blog (rest-blog)
🔌 Endpoint HTTP will be at http://localhost:20000/
📓 rest-blog Listening at http://localhost:20000
📓 rest-blog [OK] 200 / - ::1

As the service is up, you can visit localhost:20000 in your browser and see Cro’s Lorem Ipsum.

As everything is set, let’s dig in.


Let’s start with writing a Blog::Database class. We create a file Database.pm6 in a new directory lib/Blog, so that the full path is lib/Blog/Database.pm6. If you are using Comma IDE, the process is even simpler. Don’t forget to add a new entry to the provides section of the META6.json file. We will deal with users and posts:

#| A mock in-memory database.
class Blog::Database {
    has %.users;
    has %.posts;
}


As you see, users and posts are defined as hashes. The contents will be:

  • User contains: user ID, username, password
  • Post contains: post ID, title, body, ID of author and date of creation

As for users, we need a way to add a user (registration), obtain a user by ID (from a session) or by username (on login). Not so much here:

method add-user(:$username, :$password) {
    my $id = %!users.elems + 1;
    %!users{$id} = { :$id, :$username, :$password }
}

multi method get-user(Int $id) { %!users{$id} }

multi method get-user(Str $username) { %!users.values.first(*<username> eq $username) }

We use the current hash size to produce new IDs, and the getters are implemented as trivial operations on the hash.
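Here is the class so far as a standalone snippet with hypothetical sample data, showing how the two get-user candidates dispatch on the argument type:

```raku
class Blog::Database {
    has %.users;
    has %.posts;

    method add-user(:$username, :$password) {
        my $id = %!users.elems + 1;
        %!users{$id} = { :$id, :$username, :$password }
    }

    # Dispatch by ID (e.g. coming from a session)...
    multi method get-user(Int $id) { %!users{$id} }

    # ...or by username (e.g. on login).
    multi method get-user(Str $username) {
        %!users.values.first(*<username> eq $username)
    }
}

my $db = Blog::Database.new;
$db.add-user(:username<alice>, :password<secret>);
say $db.get-user(1)<username>;    # alice
say $db.get-user('alice')<id>;    # 1
```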

Posts are our CRUD resource, so we want to have more methods:

  • Create
method add-post(:$title, :$body, :$author-id) {
    my $id = %!posts.elems + 1;
    %!posts{$id} = { :$id, :$title, :$body, :$author-id, created => now }
}
  • Read
    method get-post(UInt $id) { %!posts{$id} }
  • Update
    method update-post($id, $title, $body) {
        %!posts{$id}<title> = $title;
        %!posts{$id}<body> = $body
    }
  • Delete
    method delete-post($id) { %!posts{$id}:delete }

With this under our belt, we can proceed.


There are plenty of articles explaining the authorization vs authentication topic, so here we will look at how it works from the Cro user perspective.

Firstly, we need to define a Session class. A session holds the current data about the user on the server-side. For each new client, our service creates a new session object and sends back to the client a special “key” (session ID), saying “This is your session key, don’t you dare to drop it somewhere!”. Thus, the client knows nothing about its particular session, but it knows how to say “I want this page, oh, and by the way, here is the key you gave me, maybe there will be more candies just for me!”.


The server knows how to correspond keys to particular session objects and can decide what to do with the request based on its data.

Let’s define a very simple session class in Blog::Session:

use Cro::HTTP::Auth;

class Blog::Session does Cro::HTTP::Auth {
    has $.user-id is rw;

    method logged-in { $!user-id.defined }
}

subset LoggedIn of Blog::Session is export where *.logged-in;

Our class has to do the Cro::HTTP::Auth role to be recognized by Cro as a session holder class. We also store the user’s ID in an attribute and provide a method to check if the user is logged in: if the user has an ID, then this is definitely not some anonymous user lurking around.

We also provide a handy subset for the created type (LoggedIn is a subset of Blog::Session where logged-in method returns True).
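To see what the subset buys us, here is a minimal self-contained sketch of the same shape; a stand-in role is used instead of Cro::HTTP::Auth so it runs without Cro installed:

```raku
# Stand-in for Cro::HTTP::Auth, just for this sketch:
role Auth { }

class Session does Auth {
    has $.user-id is rw;
    method logged-in { $!user-id.defined }
}

subset LoggedIn of Session where *.logged-in;

my $s = Session.new;
say $s ~~ LoggedIn;   # False
$s.user-id = 1;
say $s ~~ LoggedIn;   # True
```

This smartmatch is exactly the check the router performs when a route handler declares its session parameter with the LoggedIn type.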

There are different ways to set “keys” (cookies, headers etc) and Cro supports various setups as well (in-memory storage, persistent storage, redis storage, more can be added), but for the sake of simplicity we will use in-memory, cookie-based session support.

So, secondly, how do we enable it? Our server takes a request from the network, parses it, then passes it on for processing, and a response is sent back. Somewhere in-between we need to add something that will:

  • For new users, create a session and add “This is your key, brave one!” to the response
  • For users with keys, retrieve a session and tell “This is a session data of the user!” to the router

There are numerous places where we can add such a piece of software working in the middle, such a middleware.

First “normal” place is server-level, second “normal” place is route-level. There are different pros and cons for them, but this time we will go to service.p6 and add one to our server:

    application => routes(),
    before => [
        Cro::HTTP::Session::InMemory[Blog::Session].new(
            expiration => Duration.new(60 * 15),
            cookie-name => 'XKHxsoOwMNdkRrgqVFaB')
    ],
    after => [
        Cro::HTTP::Log::File.new(logs => $*OUT, errors => $*ERR)
    ]

Don’t forget to import our Blog::Session class.

Along with other options passed to the Cro::HTTP::Server constructor, such as host, port and the application to serve, we specify the before argument containing a list of middleware we want to apply. We configure Cro::HTTP::Session::InMemory with our session class as a type parameter, saying “I want to work with session objects of this type”. We also specify the name of the cookie and when it expires, so the user needs to login again. The expiration period is reset on every new request from the user, so users actively browsing a site won’t see a sudden “Login” page.

Why are we adding it on server-level instead of router-level? It’s a Surprise Tool that will help us later!

While we are in service.p6, it would be handy to create an application-wide database and pass it to our router.

Create a new Blog::Database object and pass it to the routes subroutine, patching its signature to accept a parameter along the way. In a more complex application we could connect to a persistent database here, do various checks, etc.

Now it is finally time to write some router code!

Routing: Principles

In our application we have two modules, Auth and Blog, which are responsible for authentication and blogging features respectively. While they are not too big by themselves, we will separate them into different modules for demonstration purposes.

As described in an article about The Cro Approach, a Web application built with Cro::HTTP is just a bi-directional pipeline from “network input” into “network output”. All the underneath business like parsing is done for the user already.

When a pipeline is set up (which is done with the Cro::HTTP::Server usage in service.p6 entry point) and the middleware is in place, the “core” of our application is a router.

Speaking from a high-level point of view, a router is something that takes requests and emits responses.

One can write a router whatever way is suitable as long as the constraints are met, but for most applications using the handy route subroutine and a bunch of helper subroutines is more than enough to get stuff done.

As you can see in the stub project we have, our Blog::Routes module already contains a single sample route that serves a dummy we saw before.

To make our application useful, we will add more routes. For detailed description of API refer to Cro::HTTP::Router documentation.

Routing: The Beginning

I like my modules to be kept in order. As we are writing a blog app, naturally the blog router should be in Blog::Routes module, but the stub greets us with just Routes. Just move the file into a new directory and adjust META6.json data (or just drag and drop the file if using Comma).

Now, let’s adjust its contents:

use Cro::HTTP::Router;

sub routes($db) is export {
    route {
        after { redirect '/auth/login', :see-other if .status == 401 };

        get -> 'css', *@path {
            static 'static-content/css', @path
        }
    }
}

We replaced default route with a couple of lines.

The call to the after subroutine with a block adds a new middleware on the router level. For each response, the block is executed with the response as its topic, and the middleware checks the response’s status code. If it is 401 (Unauthorized), we set a redirect to our (future) login page.

The second subroutine call is a definition of a route that will serve static content – our CSS files. For our HTML pages to look less sad, we’ll use the Bootstrap toolkit, so we create a static-content/css directory in the project’s root and add the bootstrap.min.css file there. The file can be obtained from the official Bootstrap framework page, various CDN services or whatever way you might want to serve styles. Of course, the layout is up to you and it is nowhere near necessary.

Routing: The Auth

Let’s create a new router for auth-related routes.

Create Blog::Routes::Auth module with auth-routes subroutine declared, which returns a result of route call:

use Cro::HTTP::Router;

sub auth-routes(Blog::Database $db) is export {
    route {
        # Routes will be here!
    }
}

It has no routes for now, but we already can include it into our “main” router. Let’s add it into Blog::Routes module:

use Blog::Routes::Auth;

sub routes(Blog::Database $db) is export {
    route {
        include auth => auth-routes($db);
    }
}

To include a router we use include, which should be easy enough to remember!

If this call looks like magic, we can rewrite it as:

include(auth => auth-routes($db));

Which is just a call with a named argument. The key can be a string or a list of strings, and it defines a prefix for each route of the included router. The value is just a call to our auth-routes, which creates a new router.

We also pass the $db argument, as we certainly want to work with our models in routes of the new router.

Before jumping into the routers’ implementation, we have one more question to look at…

Cro::WebApp template

Cro::HTTP is not a web framework. But it can be one. How?

It gives you ability to respond to HTTP requests, and does not tie you with its own decisions about “How” you do that.

  • Do you want to model your data? Just model it Whatever the way you want.
  • Do you want to serve HTML to your users? Just prepare it Whatever the way you want.
  • Do you want to work with requests and responses? Leave this to Cro::HTTP!

The one thing we did not discuss yet is HTML templating. Indeed, aside from getting request data from our users, we need to greet them with some nice pages before. To do this, we will use Cro::WebApp module.

It is a templating engine with syntax close to Raku, so it needs some time to get used to. It is highly recommended to glance over its documentation page before reading the template code.

The templates code is deliberately not included in this post for numerous reasons (nobody likes boring HTML and everybody likes templating even less), but is available in the code repo.

Routing: The Auth Strikes Back.

Our registration page URL will look like /auth/register. It accepts GET and POST requests. Finally, the code:

sub auth-routes(Blog::Database $db) is export {
    route {
        get -> Blog::Session $session, 'register' {
            template 'register.crotmp', { :logged-in($session.user-id.defined), :!error };
        }

        post -> Blog::Session $session, 'register' {
            request-body -> (:$username!, :$password!, *%) {
                with $db.get-user($username) {
                    template 'register.crotmp', { error => "User $username is already registered" };
                } else {
                    $db.add-user(:$username, :password(argon2-hash($password)));
                    redirect :see-other, '/auth/login';
                }
            }
        }
    }
}

The first call to get creates a handler for GET request to /auth/register URL. The auth piece is a default prefix in this router, as we specified it as a named argument on inclusion.

It calls template from the Cro::WebApp module to render our template with the data specified in the second argument. The first argument of the handler block, $session, is not related to URL pieces; it specifies that this handler needs the session object of the current user.

The second route is for a POST request to the same URL. It uses request-body to unpack form data into variables. The next lines check if the user already exists and present an error in that case; otherwise a new user is created. Don’t forget to hash the password! When the new user account is created, we set a redirect to the login page.

The request-body subroutine is smart enough to parse request data based on its content type without any extra code, be it JSON, a plain form, multipart form data or whatever content type you can implement a handler for.

Login page is very similar: GET returns a template, POST collects data and processes it, with a twist:

post -> Blog::Session $session, 'login' {
    request-body -> (:$username!, :$password!, *%) {
        my $user = $db.get-user($username);
        with $user {
            if (argon2-verify($_<password>, $password)) {
                $session.user-id = $_<id>;
                redirect :see-other, '/';
            } else {
                template 'login.crotmp', { :!logged-in, error => 'Incorrect password.' };
            }
        } else {
            template 'login.crotmp', { :!logged-in, error => 'Incorrect username.' };
        }
    }
}

While almost everything is similar and thus not so hard to grasp, we can see that this route handler actually uses $session object to assign a user ID on login.

Nothing else needs to be done: Cro::HTTP will take care of preserving this session in storage, and on the next requests from this user, given the session key is passed, the handler will be able to check whether the user is logged in and, if yes, what the ID is.

Everything else here is typical: request-body to parse a form, template, redirect and Raku code.

As for logging out, the code is pretty short as well:

get -> Blog::Session $session, 'logout' {
    $session.user-id = Nil;
    redirect :see-other, '/';
}

Here, we can erase the session object data whatever the way we want, and then redirect.

Routing: The Blog

Aside from writing boring templates, now we should have a simple application with an ability to create new users and log in.

But when the users are redirected to index page of our site, a sad error welcomes them. Let’s make it more welcoming!

This calls for a new module, Blog::Routes::Blog.

Once again, include it into our main router with a simple:

use Blog::Routes::Blog;
include blog-routes($db);

Note that we don’t pass a named argument. The reason is that while we want blog-related routes to be served under /blog prefix, this router will also handle index page, /, without a prefix. Instead, we can do a simple trick later.

At index page we show posts of all users. Firstly, we need to define a method on our Blog::Database to collect all info we need:

method get-posts {
    %!posts.values.map({
        $_<username> = %!users{$_<author-id>}<username>;
        $_;
    }).sort(*<created>)
}

While it may look a bit cryptic, in fact we just imitate SQL JOIN clause, because we want to show author’s username along with the post, not just ID.

It can be read this way:

  • For %!posts hash, take all values =>
  • For each value, which is a hash itself, add a new item =>
  • The item key is username, the item value is a username value of %!users item obtained by author-id key that is stored in the post records =>
  • We don’t use an explicit return; implicitly, the last result of the block execution is returned. As assignment to a new hash key returns the value of the assigned item instead of the hash, we need a single $_; to return the hash =>
  • Sort all entries by their creation date.
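The same values/map/sort shape can be tried on plain hashes (hypothetical sample data standing in for our %!users and %!posts attributes):

```raku
my %users = 1 => { id => 1, username => 'alice' };
my %posts =
    1 => { id => 1, author-id => 1, title => 'Hi', created => 5 },
    2 => { id => 2, author-id => 1, title => 'Yo', created => 3 };

# Imitate a JOIN: attach the author's username to each post, then sort by date.
my @joined = %posts.values.map({
    $_<username> = %users{$_<author-id>}<username>;
    $_;
}).sort(*<created>);

say @joined.map(*<title>);   # (Yo Hi)
```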

With this in our hands we can write a handler for the index page. Alas, nothing interesting awaits us there:

get -> Blog::Session $session {
    my $user = $session.logged-in ?? $db.get-user($session.user-id) !! {};
    $user<logged-in> = $session.logged-in;
    my $posts = $db.get-posts.map({
        $_<created> = DateTime.new($_<created>).Str;
        $_;
    });
    template 'index.crotmp', { :$user, :$posts };
}

With the session object available and our mighty database, we gather the data and push it into a template. Nice!

As we have the R part of CRUD now, we need to plan the rest (not The REST this time!): create, edit and delete.

The URL for each action will start with /blog prefix. Do we need to create another router module to not write out this annoying prefix for each route handler? Maybe yes, but maybe not. For this case, let’s just inline include. Or was it include inline?

Whatever the way it is:

include <blog> => route {
    get -> ...
    post -> ...
}

Just as if we had called our *-routes subroutines, except we omit that layer of indirection, sacrificing four spaces of indentation.

(by the way, there is no obligation for the *-routes naming scheme usage, but it is easy to remember and use)

After looking at register route handler, the post creation one is typical: get will serve a template with a form, while post will parse the form with request-body, do a call to DB to save the post and make a redirect.

The next two routes are update and delete. Let’s write them up:

post -> LoggedIn $session, UInt $id, 'update' {
    with $db.get-post($id) -> $post {
        if $post<author-id> == $session.user-id {
            request-body -> (:$title!, :$body!) {
                $db.update-post($id, $title, $body);
                redirect :see-other, '/';
            }
        } else {
            forbidden;
        }
    } else {
        not-found;
    }
}

post -> LoggedIn $session, UInt $id, 'delete' {
    with $db.get-post($id) -> $post {
        if $post<author-id> == $session.user-id {
            $db.delete-post($id);
            redirect :see-other, '/';
        } else {
            forbidden;
        }
    } else {
        not-found;
    }
}

Note we used the LoggedIn subset as the type of the $session object. During the routing of a request, its session object will be checked to meet the requirement (in this case, for the user to be logged in) and if it doesn’t, an Unauthorized response will be formed.

Now look at the code closely. I can see it coming…

When in Rome, do as the Romans do, they say, and indeed, when writing code in Raku THIS insane amount of boilerplate is just ridiculous! I declare before the gods and goddesses and even Santa Claus himself: we want and can do better than this!

And with the language and libraries brought to us by awesome contributors from all around the globe, let’s make it neater:

#| A helper for executing code blocks
#| only on posts one can access
sub process-post($session, $id, &process) {
    with $db.get-post($id) -> $post {
        if $post<author-id> == $session.user-id {
            process($post);
        } else {
            forbidden;
        }
    } else {
        not-found;
    }
}

We take a session, the $id of the post and the action to do. If the post exists, we check if the user has the rights to modify it. All’s OK? Execute the code! Something is wrong? Notify the user about it!

Now we can re-write the POST routes above as:

post -> LoggedIn $session, UInt $id, 'update' {
    process-post($session, $id, -> $ {
        request-body -> (:$title!, :$body!) {
            $db.update-post($id, $title, $body);
            redirect :see-other, '/';
        }
    });
}

post -> LoggedIn $session, UInt $id, 'delete' {
    process-post($session, $id, -> $ {
        $db.delete-post($id);
        redirect :see-other, '/';
    });
}

Even now I want to discuss with Santa whether it is worth factoring out the redirect call into our helper subroutine as well. My answer: nope.

The point, hopefully taken here, is that one can flexibly factor out the logic of processing requests. And roles in the application. And cookies. Om-nom-nom.

Setting nginx as a reverse proxy

Let’s say you want to hide your application behind an nginx reverse proxy. Be it load balancing, free caching or something else, there are reasons to do it. As the application we made can be served using its native tools, there is not much configuration to be done to achieve this.

The prerequisite for this is to have nginx installed on your server.

Next, you run the application using the Cro command line tool and, armed with the port it listens on, you can modify the server section of your nginx config (in the simplest case, its location on GNU/Linux systems is /etc/nginx/nginx.conf):

server {
    listen       80;
    server_name  localhost;

    location / {
        proxy_pass http://localhost:20000/;
    }
}

As a next step, you check that the resulting config is correct using the nginx -t command and reload the server using nginx -s reload.

Given your application is up and running, you should be able to visit localhost and see the main page.

A lot of other things might be done: writing a service unit for easy management of your service in case of failures or machine reboots; making your nginx config much more interesting; adding HTTPS support (which is highly recommended), as our service has auth pieces and sending the password over plain HTTP is dangerous.

Building a docker image

So services are cool, but the thing everyone talks about now is Docker and Kubernetes. Care to containerize your app? Think of a nice name and execute this command using it in the root directory of your project:

docker build -t my-cool-app-name-here .

That’s all! A container is prepared for you and you can manage it as you wish.


In this rather long tutorial we discussed some basic topics:

  • Structure for a small-to-medium Cro application.
  • Authorization and authentication parts in general and implementation-specific examples.
  • Implementation of commonly written route handlers.
  • Serving and deploying of your application.

Of course, there are many more features available along with cool tricks, yet this goes far beyond this already long post.

The full sources including templates are available here.

Congratulations on finishing this tutorial! As December came, I wish you to have a hot drink and a nice day.

Day 1 – Raku from Perl: Transforming Old Perl Code


I have been using Raku (Perl 6’s new name) since mid-2015 and really appreciate its nice features for programmers who may be lazy, non-touch typists, amateurs, old Perl lovers, non-parallel users, or wannabe hackers including:

  • kebab-case names
  • brace-less control statements
  • easy class construction
  • lexical block variables
  • copious built-in routines
  • native Unicode support
  • powerful, easy-to-use function signatures

but one of its featured non-core modules has really come into play for me recently, so I am highlighting it as a surprise Raku gift for me this year: the Raku module Inline::Perl5, written by Stefan Seifert (IRC #raku: ‘nine’; Github: ‘niner’).

Before proceeding, please note the new Raku links for the Raku home page, Raku docs, and Raku modules.

NOTE: At the moment I only use Raku on Debian Linux hosts, so I can’t help if you have any problems on Windows or Mac OSX with any of the following.


I describe myself as a pragmatic programmer (i.e., getting the job done ASAP with no frills), with little formal programming training except during college in the main-frame, batch job era, and later, off-duty, while in my last job in the US Air Force. (See this document for more context.)

Soon after being hired in mid-1993 at my last civilian employer (another US DoD contractor, from whom I retired on 2016-01-01), I discovered Perl 4 and found it was the ideal language to create the tools I needed for our small local office to move from an intensive manual process to a more automated one (I was then using C heavily, and Perl was the first interpreted language I had used since Basic). Over the years, I eventually moved to Perl 5 (much later than I should have) and continued to grow and improve my company’s software toolbox (much written at home on my own time), now on Redhat Linux computers, all in Perl, which included automatic image generation and document production (using Perl to write PostScript converted to PDF). The documents produced enabled my team of analysts to see standard results plots, tables, and other generated metrics, and thus they had more time and could easily write their detailed analyses, which were then incorporated into the final products.

In addition, I built other products for my personal use including a personalized calendar with integrated database, a Christmas Card address database with label maker, and several websites. In sum, I have a lot of old as well as more recent Perl code at my house!

2015 and Raku

I had always hoped to see Raku (Perl 6) coming soon, because the existing language seemed to be a little clunkier than it could be, but CPAN and its wonderful module authors, especially Damian Conway, helped mitigate the bumps.

So I was very happy to join in the -Ofun when I checked on the progress of Raku in mid-2015 and saw the impending initial stable release. I immediately started trying to convert some of my Perl products to pure Raku starting with some of the CPAN modules important to me in my personal projects. The first was Perl’s Geo::Ellipsoid which was a real learning experience and took much longer than I thought. Eventually I published eight pure Raku modules to CPAN.

Fast forward to 2019

When I started porting my own tools to Raku this year the real fun began. When I built the original tools, much of it was done in a rush with little time for thought and design, and very little testing, and certainly not a test suite. Consequently, I had lots of ugly code sitting around ready to be ported to Raku. To paraphrase Dr. Strangelove [Ref. 1], I stopped worrying about the mess and started working on a Raku port.

Part 1: Preliminary testing

I first started with porting modules and then the programs that used them but found that to be far too labor intensive in many cases. I encountered problems with lack of signatures, much use of GOTOs, global variables in long main programs (a.k.a. scripts) with lots of subroutines, etc.

So I finally, just this year, decided to try using Inline::Perl5 to simplify my chore. I changed my porting process to:

  1. Move existing subroutines in Perl programs to Perl modules.
  2. Ensure the Perl programs continue to work as expected after step 1.
  3. Port the now-much-shorter Perl programs to Raku, a much easier task than before.

Before I seriously started, I created a Raku script to find all my Perl files (using File::Find with the regex /['.pm'|'.pl']$/), read them line-by-line, and write them out again to see if there were any issues handling them with Raku, and I certainly found some: in some of my very old code (mid 1990s) I got errors about malformed UTF-8 like this:

ERROR: something failed in file '': Malformed UTF-8

I tried several methods to isolate the bad lines, including using the *nix tool od, but that was painfully slow, and visual inspection with vim didn’t always work. (I didn’t get around to using either Emacs or Comma since I was doing the work remotely, so I don’t know if they would have helped.) Luckily, I stumbled on a trick while I was using a limited set of files for testing, when I used this fragment in my program:

try { my $string = slurp $infile }
if $! {
    note "Error attempting to slurp file '$infile'";
    note "$!";
}

When a UTF-8 error was detected, I would get an error message like

Error attempting to slurp file ''
Malformed UTF-8 at line 179 col 66

which enabled me to easily see the problem character in the original file and change it to valid UTF-8.
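Putting the two pieces together, the checking script might look roughly like the sketch below, assuming the ecosystem File::Find module is installed (the directory is an assumption; my actual script differed in details):

```raku
use File::Find;

# Collect all Perl module and program files under the current directory
my @files = find(:dir('.'), :name(/ ['.pm' | '.pl'] $ /));

for @files -> $infile {
    # slurp throws on malformed UTF-8; try traps it and sets $!
    try { my $string = slurp $infile }
    if $! {
        note "Error attempting to slurp file '$infile'";
        note "$!";
    }
}
```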

Importing Perl modules into both Perl and Raku

When modifying the existing Perl modules to be used by Perl as well as Raku I found two final problems that overlap:

  1. In the Perl programs and their Perl modules, how does one handle sets of
    global variables found missing when the programs’ subroutines are
    moved into an existing or new Perl module?
  2. How does one export subs and vars from the Perl module into both Perl
    and Raku programs?

Problem 1: Global variables

Inline::Perl5 doesn’t currently document accessing variables, and, of course, such practices are not recommended at all, but, with help from the author (Stefan Seifert), I found a way. We start with an example Perl module, to be used with both Perl and Raku programs, which looks like this:

package P5;

use feature 'say';
use strict;
use warnings;

#| The following module does NOT affect exporting to Raku, it only
#| affects exporting to Perl programs. See program `` for
#| examples.
use Perl6::Export::Attrs; #= [from CPAN] by Damian Conway

our $VERSION = '1.00';

#| Always exported (no matter what else is explicitly or implicitly
#| requested):
our %h :Export(:MANDATORY);
our $pa :Export(:MANDATORY);

#| Export $pb when explicitly requested or when the ':ALL' export set
#| is requested.
our $pb :Export(:DEFAULT :pb);

#| Always exported:
sub set_vars :Export(:MANDATORY) {
    %h = ();
    $h{a} = 2;
    $pa = 3;
    $pb = 5;
}

#| Always exported (no matter what else is explicitly or implicitly
#| requested):
sub sayPA :Export(:MANDATORY) {
    say "  \$pa = $pa";
}

#| Always exported:
sub sayPB :Export(:DEFAULT :sayPB) {
    say "  \$pb = $pb";
}

#| Always exported:
sub sayH :Export(:MANDATORY) {
    foreach my $k (sort keys %h) {
        my $v = $h{$k};
        say "  key '$k', value '$v'";
    }
}

1; #= mandatory true return

Problem 2: Exporting global variables

As noted in the module above, the export information provided by `Perl6::Export::Attrs` is only for the use of Perl code (it will not affect Raku programs that load the module via Inline::Perl5). However, inserting `use Perl6::Export::Attrs;` in any Perl module greatly eases the task of exporting as desired without a lot of boilerplate Perl code. One doesn’t have to use it, but I highly recommend it. A bonus is that eventually porting the Perl module to Raku will be easier.

Perl programs using module P5

One can access the objects in the Perl module in a Perl program like this:

#!/usr/bin/env perl
use feature 'say';
use strict;
use warnings;

use lib qw(.);
use P5 qw($pb sayPB); # <== notice the explicit requests


my %h = %P5::h;
say "Current globals in P5:";
foreach my $k (sort keys %h) {
    my $v = $h{$k};
    say "  key '$k', value '$v'";
}

say << "HERE";

Modify current globals in P5:
  \$P5::h{a} = 3
  \$P5::h{c} = 5 # a new key/value pair
  \$P5::pa = 4
  \$P5::pb = 6
HERE

$P5::h{a} = 3;
$P5::h{c} = 5;
$P5::pa = 4;
$P5::pb = 6;

say "Revised globals in P5:";

Raku programs using module P5

And one can access the Perl module’s objects in a Raku program like this (file use5.raku):

#!/usr/bin/env perl6

#| Notice no explicit use of Inline::Perl5, but it
#| must be installed.
use lib:from<Perl5> '.'; #= Must define the Perl lib location with this syntax.
use P5:from<Perl5>;      #= Using the Perl module.

#| =========
#| Bind the hash variable so we can modify the hash.
#| For access only, use of the '=' alone is okay.

my %h := %P5::h;

say "Current globals in P5:";
for %h.keys.sort -> $k {
    my $v = %h{$k};
    say "  key '$k', value '$v'";
}

say qq:to/HERE/;

Modify current globals in P5:
  \%h<a> = 3
  \%h<c> = 5 # a new key/value pair
  \$P5::pa = 4
  \$P5::pb = 6
HERE

%h<a> = 3;
%h<c> = 5;

#| Need this syntax to access or modify a scalar:
$P5::pa = 4;
$P5::pb = 6;

say "Revised globals in P5:";

The three test files all work together and provide a blueprint for working with my real code.

Part 2: Using real code

In this section I will be using files from one of my projects: my college class website (see it here). I started it in 2009 and have been adding to it and maintaining it often, so it has a lot of crufty Perl code. I have created a Github repository which contains the code I’ll be using in the following discussion. You can follow along by cloning it like this:

$ git clone

The code I will be using is in the raku-advent-extras/2019/ directory. The code has been sanitized so no non-public information is shown, and it will not be totally functional, but the main script should always run if executed without any arguments. Let the games begin!

Finding global variables

Using the syntax examples above in my real Perl modules, I first moved the obviously marked global variables in a Perl program to a new Perl module named with a single letter for easy use, such as G (for Global). For example, finding a variable $start_time in the main program, I would rename it to $G::start_time and put it into the module as our $start_time.

Then I exercised the program repetitively, finding more global variables at each run, adding them to the module, and so on until all globals were found.
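As a sketch of that pattern (the module body shown here is illustrative, not the project's actual code):

```perl
# G.pm -- hypothetical single-letter module holding former globals
package G;
use strict;
use warnings;

our $start_time;   # formerly a bare global in the main program

1;
```

In the main program, `use G;` plus references to `$G::start_time` then replace the old bare `$start_time`.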

The first real files to work with after defining the Perl global variables are the program file and two Perl modules; they will be used in the rest of this article. To get to a common starting point, in the git repo:

$ git checkout stage-0

and ensure the main script runs with no arguments:

$ ./
Usage: ./ -gen | -cvt [-final][-useborder][-usepics][-debug][-res=X]

Now start a new branch: $ git checkout -b stage-1.

Stage-1: Move all subs in the main program to a new Perl module

At this point I’m going to finish moving all the Perl subs in the main program to a new module. I’ll do it one at a time, executing the program to see if we have any problems, and so on until all (or most) subs are stashed in the new Perl module. The steps:

  • Create
  • Add use OtherSubs to the program
  • Remove sub dequote (not needed)
  • Move sub Build_web_pages to

I got errors for the following missing global symbols:

Global symbol "$CL_HAS_CHANGED"...
Global symbol "$CL_WAS_CHECKED"...
Global symbol "$GREP_pledge_form"...
Global symbol "$USAFA1965"...
Global symbol "$USAFA1965_tweetfile"...
Global symbol "$debug"...
Global symbol "$dechref"...
Global symbol "$force_xls"...
Global symbol "$real_xls"...

After I resolved that issue, I continued to move subs, and resolve new global variables, until all were moved. You should see a commit message after each sub was successfully moved. I stopped with one sub left in the program file, sub zero_modes, since it is part of the option handling and shouldn’t normally be in a module.

Stage-2: Port the Perl program to Raku

For this part I started a new branch from the stage-1 branch: $ git checkout -b stage-2.

I’m sure every Raku programmer will proceed to port a Perl program to Raku in a different way, but following is my general recipe.

  1. Copy the existing program to an equivalent Raku name, in this case manage-web-site.raku (see Notes 1 and 2 below).
  2. Change the shebang line to use perl6.
  3. Execute the program. PROBLEMS!!

I got the following errors:

Could not find feature at line 3 in:

I then found one problem that I hadn’t addressed in the general process: conflicting global symbols. That happened when I tried to use the Raku version of the Geo::Ellipsoid module while some of the Perl modules were still using the Perl version. I solved the immediate problem by commenting out the Raku version and using the Perl version in the program file.

After I resolved that issue, I continued to remove or replace used modules, handle more global variables, find or ignore missing subroutines, replace =pod/=cut with =begin comment/=end comment, remove unneeded parens, use Raku idioms (e.g., Raku ‘for’ versus Perl ‘foreach’), and fix issues until all were resolved. You should see a commit message after each issue was successfully resolved. I also tried to clean up the code while I worked.
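The mechanical idiom conversions mentioned above mostly follow the shape sketched below (a generic example, not code from the project):

```raku
my %h = a => 1, b => 2;

# Perl:  foreach my $k (sort keys %h) { print "$k => $h{$k}\n"; }
# Raku:  no parens needed around the loop list, and 'say' adds the newline
for %h.keys.sort -> $k {
    say "$k => %h{$k}";
}

=begin comment
Raku uses =begin comment / =end comment blocks
where Perl used =pod / =cut.
=end comment
```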

Stage-3: Tidying the Raku program

Finally, the program manage-web-site.raku runs (with no input arguments) with no errors. At this point I checked out a stage-3 branch for cleaning up the program a bit: git checkout -b stage-3. I removed a lot of comments and unneeded parens. I also removed use statements for modules that are no longer actually used in the program file after the subs were moved. Additionally, I made the help system a bit cleaner. I leave one obvious Raku feature to be added as an exercise for the reader: in the ugly if/else blocks for option selection, change to use Raku’s when blocks.

We started with a file of about 6600 lines of ugly Perl code and finished with a Raku version, manage-web-site.raku, with fewer than 800 lines and a much cleaner look. We’re not finished with the port yet: we still have to test each option for proper functioning (and I’m sure there be dragons 🐉!). Ideally, we’ll also add tests in the process. But we don’t have all the necessary content for that, so we’ll stop at this point (but follow me on my next steps in Part 2 of this post on Day 9).


You have seen one way to ease porting Perl code to Raku, and I hope it may help those who are considering moving to Raku see that it can be accomplished iteratively in smaller steps instead of taking great chunks of time. Part 2 of this post on Day 9 will try to take the next baby step and convert a Perl module to Raku and have it be used by both Perl and Raku callers.

I ❤️ ❤️ Raku! 😊

🎅 Merry Christmas 🎅 and 🥂 Happy New Year 🎉 to all and, in the immortal words of Charles Dickens’s Tiny Tim, may ✝ “God bless Us, Every One!” ✝ [Ref. 2]



Notes

  1. I actually started the file rename while accidentally in the stage-1 branch, sorry.
  2. The file extension of ‘.raku’ is the community-accepted convention for Raku executable programs. However, for the foreseeable future, its use (on *nix systems) depends on having the perl6 Rakudo compiler installed and one of two other conditions: (1) the user’s program file marked as executable with chmod +x and having the proper shebang line as the first line of the file or (2) executing the program as perl6 myprog.raku. Sometime hopefully soon, when the Rakudo compiler’s executable is available as raku and it is installed on your system, replace perl6 in the instruction above with raku. (Windows and Mac users will have to get their instructions from other sources.)


References

  1. Movie (1964): Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (see
  2. A Christmas Carol, a short story by Charles Dickens (1812-1870), a well-known and popular Victorian author whose many works include The Pickwick Papers, Oliver Twist, David Copperfield, Bleak House, Great Expectations, and A Tale of Two Cities.

Raku modules used (install with zef)

  • Inline::Perl5

Perl modules used from CPAN (install with cpanm)

  • Perl6::Export::Attrs