All the blogs posts of 2021


It’s that time of the year

When we start all over again with advent calendars, publishing one article a day until Christmas. This is going to be the first full year with Raku being called Raku, and the second year we have moved to this new site. However, it’s going to be the 12th year (after this first article) in a row with a Perl 6 or Raku calendar, previously published in the Perl 6 Advent Calendar blog. And also the 5th year since the Christmas release, which was announced in the advent calendar of that year.

Anyway. Here we go again! We have lined up a full (or eventually full, by the time the Advent Calendar is finished) set of articles on many different topics, but all of them about our beloved Raku.

So, enjoy, stay healthy, and have -Ofun reading this nice list of articles this year will be bringing.

Day 25 – Future-proofing the Raku Programming Language

Around this time last year, Jonathan Worthington was writing their Advent Post called Reminiscence, refinement, revolution. Today, yours truly finds themselves writing a similar blog post after what can only be called a peculiar year in the world.

The Language

Visible Changes

The most visible highlights in the Raku Programming Language are basically:

last / next with a value

use v6.e.PREVIEW;
say (^5).grep: { $_ == 3 ?? last False !! True } # (0 1 2)
say (^5).grep: { $_ == 3 ?? last True  !! True } # (0 1 2 3)

Normally, last just stops an iteration, but now you can give it a value as well, which can be handy e.g. in a grep where you know the current value is the last, but you still want to include it.

use v6.e.PREVIEW;
say (^5).map: { next    if $_ == 2; $_ } # (0 1 3 4)
say (^5).map: { next 42 if $_ == 2; $_ } # (0 1 42 3 4)

Similarly with map, if you want to skip a value (which was already possible), you can now replace that value by another value.

Note that you need to activate the upcoming 6.e Raku language level to enable this feature, as there were some potential issues when it was activated in 6.d. But that’s just one example of future-proofing the Raku Programming Language.


.pick(**)

The .pick(*) call will produce all possible values of the Iterable on which it is called in random order, and then stop. The .pick(**) call will do the same, but then start again producing values in (another) random order until exhausted, ad infinitum.

.say for (^5).pick(*);  # 3␤4␤0␤2␤1␤
.say for (^5).pick(**); # 3␤4␤0␤2␤1␤0␤2␤1␤4␤3␤3␤4␤2␤1␤0␤....

Nothing essential, but it is sure nice to have 😀.

is implementation-detail trait

The is implementation-detail trait indicates that something that is publicly visible should still be considered off-limits, as it is a detail of the implementation of something (whether that is your own code, or the core). This will also mark it as invisible for standard introspection:

class A {
    method foo() is implementation-detail { }
    method bar() { }
}
.name.say for A.^methods; # bar␤BUILDALL␤

Subroutines and methods in the core that are considered to be an implementation detail have been marked as such. This should make it clearer which parts of the Rakudo implementation are fair game for developers, and which parts are off-limits (knowing that they can be changed without notice). Yet another way to make sure that Raku programs will continue to work with future versions of the Raku Programming Language.

Invisible Changes

There were many smaller and bigger fixes and improvements “under the hood” of the Raku Programming Language. Some code refactoring e.g. made Allomorph a proper class, without changing any functionality of allomorphs in general. Other changes sped things up by using smarter algorithms, or by refactoring so that common hot code paths became smaller than the inlining limit, and thus a lot faster.

But the BIG thing in the past year was that the so-called “new-disp” work was merged. In short, you could compare this to ripping the gasoline engine out of a car (with all its optimizations for fuel efficiency from 100+ years of experience) and replacing it with an electrical engine, while the car is being driven running errands. And although the electrical engine is already much more efficient to begin with, it can still gain a lot from further optimizations.

For yours truly, the notion that it is better to remove certain optimizations written in C in the virtual machine engine, and replace them by code written in NQP, was the most eye-opening one. The reason for this is that all of the optimization work that MoarVM does at runtime can only work on the parts it understands. And C code is not what MoarVM understands, so it cannot optimize that at runtime. Simple things such as assignment had been optimized in C code and had basically become an “island” of unoptimization. But no more!

The current state of this work is a step forward, but also a step back in some aspects (at least for now). Some workflows have most definitely benefitted from the work so far (especially if you dispatch on anything that has a where clause in it, or use NativeCall directly, or indirectly with e.g. Inline::Perl5). But the bare startup time of Rakudo has increased, which has its effect on the speed with which modules are installed, or testing is done.

The really good thing about this work is that it will allow more people to work on optimizing Rakudo, as that optimizing work can now be done in NQP, rather than in C. The next year will most definitely see one or more blog posts and/or presentations about this, to lower the already lowered threshold even further.

In any case, kudos to Jonathan Worthington, Stefan Seifert, Daniel Green, Nicholas Clark and many, many others for pulling this off! The Virtual Machine of choice for the Raku Programming Language is now ready to be taken for many more spins!

The Ecosystem

Thanks to Cro, a set of libraries for building reactive distributed systems (lovingly crafted to take advantage of all Raku has to offer), a number of ecosystem related services have come into development and production.

zef ecosystem

The new zef ecosystem has come of age and is now supported by various developer apps, such as App::Mi6, which basically reduces the distribution upload / commit process to a single mi6 release command. Recommended by yours truly, especially if you are about to develop a Raku module from scratch. There are a number of advantages to using the zef ecosystem.

direct availability

Whenever you upload a new distribution to the zef ecosystem, it becomes (almost) immediately available for download by users. This is particularly handy in CI situations, if you are first updating one or more dependencies of a distribution: with the older ecosystem backends, the zef CLI wouldn’t know about your upload until up to an hour later.

better ecosystem security

Distributions on the older ecosystem backends could be removed by the author without the ecosystem noticing (p6c), or without it noticing immediately (CPAN). Distributions, once uploaded to the zef ecosystem, can not be removed.

more dogfooding

The zef ecosystem is completely written in the Raku Programming Language itself. And you could argue that’s one more place where Raku is in production. Kudos to Nick Logan and Tony O’Dell for making this all happen!


raku.land

raku.land is a place where one can browse the Raku ecosystem. A website entirely developed with the Raku Programming Language, it should be seen as the successor of the modules.raku.org website, which is not based on Raku itself. Although some features are still missing, it is an excellent piece of work by James Raspass and very much under continuous development.

Not forgetting the past

“Those who cannot remember the past are condemned to repeat it,” George Santayana said. And that is certainly true in the context of the Raku Programming Language with its now 20+ year history.

Permanent Distributions

Even though distributions can not be removed from the zef ecosystem, there’s of course still a chance that one may become unavailable, temporarily or more permanently. And there are still many distributions in the old ecosystems that can still disappear for whatever reason. Which is why the Raku Ecosystem Archive has been created: a place where (ideally) all distributions ever to be available in the Raku ecosystem are archived. In Perl terms: a BackPAN, if you will. Before long, this repository will be able to serve as another backend for zef, in case a distribution one needs is no longer available.

Permanent Blog Posts

A lot of blog posts have been written in the 20+ year history of what is now the Raku Programming Language. They provide sometimes invaluable insights into the development of all aspects of the Raku Programming Language. Sadly, some of these blog posts have been lost in the mists of time. To prevent more memory loss, the CCR – The Raku Collect, Conserve and Remaster Project was started. I’m pretty sure a Cro-driven website will soon emerge to make these saved blog posts more generally available. In the meantime, if you know of any old blog posts not yet collected, please make an issue for it.

Permanent IRC Logs

Ever since 2005, IRC has been the focal point of discussions between developers and users of the Raku Programming Language. In order to preserve all these discussions, a repository was started to store all of these logs, up to the present. Updating of the repository is not yet completely automated, but if you want to search the logs, or just want to keep up to date without using an IRC client, you can check out the experimental IRC Logs server (completely written in the Raku Programming Language).

Looking forward

So what will the coming year(s) bring? That is a good question.

The Raku Programming Language is an all volunteer Open Source project without any big financial backing. As such, it is dependent on the time that people put into it voluntarily. That doesn’t mean that plans cannot be made. But sometimes, sometimes even regularly, $work and Real Life take precedence and change the planning. And things take longer than expected.

If you want to make things happen in the Raku Programming Language, you can start by reporting problems in a manner that other people can easily reproduce. Or if it is a documentation problem, create a Pull Request with the way you think it should be. In other words: put some effort into it yourself. It will be recognized and appreciated by other people in the Raku Community.

Now that we’ve established that, let’s have a look at some of the developments now that we ensured the Raku Programming Language is a good base to build more things upon.

new-disp based improvements

The tools that “new-disp” has made available, haven’t really been used all that much yet: the emphasis was on making things work again (after the engine had been ripped out)! So one can expect quite a few performance improvements to come to fruition now that it all works. Which in turn will make some language changes possible that were previously deemed too hard, or affecting the general performance of Raku too much.


RakuAST

Jonathan Worthington‘s focus has been mostly on the “new-disp” work, but the work on RakuAST will be picked up again as well. This should give the Raku Programming Language a very visible boost, by adding full-blown macro support, and after that proper slang support. It will also make applications that depend on generating Raku code and then executing it much easier to build and maintain (e.g. Cro routing and templates, or printf functionality that doesn’t depend on running a grammar every time it is called).

More Cro driven websites

It looks like most, if not all, Raku related websites will be running on Cro in the coming year. With a few new ones as well (no, yours truly will not reveal more at this moment).

A new language level

After the work on RakuAST has become stable enough, a new language level (tentatively still called “6.e”) will become the default. The intent is to release language levels more frequently than before (the last language level increase was in 2018), targeting a yearly language level increase.

More community

The new #raku-beginner channel has become pretty active. It’s good to see a lot of new names on that channel, also thanks to a Discord bridge (kudos to Wenzel P.P. Peppmeyer for that).

The coming year will see some Raku-only events. First, there is the Raku DevRoom at FOSDEM (5-6 February), which will be an online-only event (you can still sign up for a presentation or a lightning talk!). And if all goes ok, there will be an in-person/online hybrid Raku Conference in Riga (August 11-13 2022).

And of course there are other events where Raku users are welcome: the German Perl/Raku Workshop (30 March/1 April in Leipzig), and the Perl and Raku Conference in Houston (21-25 June).

And who knows, once Covid restrictions have been lifted, how many other Raku related events will be organized!


This year saw the loss of a lot of life. Within the Raku Community, we sadly had to say goodbye to Robert Lemmen and David H. Adler. Belated kudos to them for their contributions to what is now the Raku Programming Language, and Open Source in general. You are missed!

Which brings yours truly to instill in all of you again: please be careful, stay healthy and keep up the good work!

Day 24 – Packaging and unpackaging real good

After all that Rakuing along all Christmas, Santa realizes it’s a pretty good idea to keep things packed and ready to ship whenever needed. So he looks at containers. Not the containers that might or might not actually be doing all the grunt work of bringing gifts to all good boys and girls in the world, but containers that are used to pack Raku and ship it or use it for testing. Something you need to do sooner or later, and need to do real fast.

The base container

The base container needs to be clean, and tiny, and contain only what’s strictly necessary to build your application on. So it needs a bunch of binaries and that’s that. No ancillary utilities, nothing like that. Enter jjmerelo/raku, a very bare bones container, that takes 15 MBytes and contains only the Rakudo compiler, and everything it needs to work. It’s also available from GHCR, if that’s more to your taste.

You only need that to run your Raku programs. For instance, just print all environment variables that are available inside the container:

time podman run --rm -it ghcr.io/jj/raku:latest -e 'say %*ENV'

Which takes around 6 seconds on my machine, most of it devoted to downloading the container. Not a bad deal, really, all things considered.

The thing is, it comes in two flavors. The alternative is called jj/raku-gha, for obvious reasons: it’s the one that will actually work inside GitHub Actions, which is where many of you will eventually use it. The difference? Well, a tiny difference, but one that took some time to discover: its default user, called raku, uses 1001 as UID, instead of the default 1000.

Right, I could have directly used 1001 as the single UID for all of them, but then I might have to do some more changes for GitHub Actions, so why bother?

Essentially, the user that runs GitHub actions uses that UID. We want our package user to be in harmony with the GHA user. We achieve harmony with that.

But we want a tiny bit more.

We will probably need zef to install new modules. And while we’re at it, we might also need to use a REPL in an easier way. Enter alpine-raku, once again in two flavors: regular and gha. Same difference as above: different UID for the user.

Also, this is the same jjmerelo/alpine-raku container I have been maintaining for some time. Its plumbing is now completely different, but its functionality is quite the same. Only it’s slimmer, so faster to download. Again:

time podman run --rm -it ghcr.io/jj/raku-zef:latest -e 'say %*ENV'

Will take a bit north of 7 seconds, with exactly the same result. But we will see an interesting bit in that result:

{ENV => /home/raku/.profile, HOME => /home/raku, HOSTNAME => 2b6b1ac50f73, PATH => /home/raku/.raku/bin:/home/raku/.raku/share/perl6/site/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin, PKGS => git, PKGS_TMP => make gcc linux-headers musl-dev, RAKULIB => inst#/home/raku/.raku, TERM => xterm, container => podman}

And that’s the RAKULIB bit. What it is saying is that, no matter what the environment says, we’re going to have an installation of Raku in that precise directory. Which is the home directory, so it should work, right? Only it does not, because GitHub Actions arbitrarily changes the HOME variable, which is where Raku picks it from.

This was again something that required a bit of work and understanding of where Rakudo picks up its configuration. If we run

raku -e 'dd $*REPO.repo-chain'

We will obtain something like this:

(CompUnit::Repository::Installation.new(prefix => "/home/jmerelo/.raku"), CompUnit::Repository::Installation.new(prefix => "/home/jmerelo/.rakubrew/versions/moar-2021.10/install/share/perl6/site"), CompUnit::Repository::Installation.new(prefix => "/home/jmerelo/.rakubrew/versions/moar-2021.10/install/share/perl6/vendor"), CompUnit::Repository::Installation.new(prefix => "/home/jmerelo/.rakubrew/versions/moar-2021.10/install/share/perl6/core"), CompUnit::Repository::AbsolutePath.new(next-repo => CompUnit::Repository::NQP.new(next-repo => CompUnit::Repository::Perl5.new(next-repo => CompUnit::Repository))), CompUnit::Repository::NQP.new(next-repo => CompUnit::Repository::Perl5.new(next-repo => CompUnit::Repository)), CompUnit::Repository::Perl5.new(next-repo => CompUnit::Repository))

We’re talking about the repository chain, where Raku (through Rakudo) keeps the information on where to find, effectively, the CompUnit repositories or libraries, precompiled (those are the CompUnit::Repository::Installation) or not (CompUnit::Repository::AbsolutePath). But let’s look at the first one, which is where it will start looking. It’s effectively our home directory, or more precisely, a subdirectory where things are installed in the normal course of things. Where does Rakudo pick that from? Let’s change the HOME environment variable and we’ll see, or rather not, because depending on the installation, it will simply hang. With RAKULIB defined as above, however, say $*REPO.repo-chain will print

(inst#/home/raku/.raku inst#/tmp/.raku inst#/usr/share/perl6/site inst#/usr/share/perl6/vendor inst#/usr/share/perl6/core ap# nqp# perl5#)

Our CompUnit::Repository::Installation becomes here inst#/home/raku/.raku, but, what’s more important, the HOME environment variable gets a .raku tacked on the end and an inst# in front, implying that’s the place where Rakudo expects to find it.

This brings us back again to GitHub Actions, which changes that variable for no good reason, leaving our Rakudo installation effectively unusable. But no fear, a simple environment variable baked into the alpine-raku container (and its GHCR variants) will keep the actual Rakudo installation in check for GitHub Actions to come.

Now we’re all set

And we can write our own GitHub actions using this image. Directly run all our stuff inside a container that has Raku. For instance, this way:

name: "Test in a Raku container"
on: [ push, pull_request ]
runs-on: ubuntu-latest
packages: read
image: ghcr.io/jj/raku-zef-gha
name: Checkout
uses: actions/checkout@v2
name: Install modules
run: zef install .
name: Test
run: zef –debug test .
view raw raku-test.yaml hosted with ❤ by GitHub
GHA used in Pod::Load

This is deceptively simple, doing exactly what you would do in your console: install, and then test, right? That’s it. Underneath, however, the fact that the container is using the right UID, and that Raku knows where to find its own installation despite all the moving and shaking that’s going on, is what makes it run.

You can even do a bit more. Use Raku as a shell for running anything. Add this step:

  - name: Use Raku to run
    shell: raku {0}
    run: say $*REPO.repo-chain

And with the shell magic, it will actually run that directly on the Raku interpreter. You can do anything else you want: install stuff, run Cro if you want. All within the GitHub action! For instance, do you want to chart how many files were changed in the latest commits using Raku? Here you go:

- name: Install Text::Chart
  run: zef install Text::Chart
- name: Chart files changed latest commits
  shell: raku {0}
  run: |
    use Text::Chart;
    my @changed-files = qx<git log --oneline --shortstat -$COMMITS>
        .lines.grep( /file/ )
        .map( * ~~ /$<files>=(\d+) \s+ file/ )
        .map: +*<files>;
    say vertical(
        :max( @changed-files[0..*-2].max ),
        @changed-files[0..*-2]
    );

This can be added to the action above

Couple of easy steps: install whatever you need, and then use Text::Chart to chart those files. This needs a bit of explaining, or maybe directly checking the source to get the complete picture: an environment variable called COMMITS, which is one more than the number of commits we want to chart, has been used to check out all those commits; we then need to pop the last one, since it’s a squashed commit that contains all older changes in the repo, uglifying our chart (which we don’t want). Essentially, however, it is a pipeline that takes the text content of the log that includes the number of files changed, extracts that number via a regex match, and feeds it into the vertical function to create the text chart. Which will show something like this (click on the > sign to show the chart):

Files changed in the last 10 commits in Pod::Load

With thousands of files at your disposal, the sky’s the limit. Do you want to install fez and upload automatically when tagging? Why not? Just do it. Upload your secret, and Bob’s your uncle. Do you want to do some complicated analytics on the source using Raku or generate thumbnails? Go ahead!

Happy packaging!

After this, Santa was incredibly happy, since all his Raku stuff was properly checked, and even containerized if needed! So he sat down to enjoy his corncob pipe, which Meta-Santa brought for him last Christmas.

And, with that, Merry Christmas to all and everyone of you!

Day 23 – The Life of Raku Module Authoring

by Tony O’Dell

Hello, world! This article is a lot about fez and how you can get started writing your first module and making it available to other users. Presuming you have rakudo and zef installed, install fez!

$ zef install fez
===> Searching for: fez
===> Updating fez mirror: https://360.zef.pm/
===> Updated fez mirror: https://360.zef.pm/
===> Testing: fez:ver<32>:auth<zef:tony-o>:api<0>
[fez]   Fez - Raku / Perl6 package utility
[fez]   USAGE
[fez]     fez command [args]
[fez]   COMMANDS
[fez]     register              registers you up for a new account
[fez]     login                 logs you in and saves your key info
[fez]     upload                creates a distribution tarball and uploads
[fez]     meta                  update your public meta info (website, email, name)
[fez]     reset-password        initiates a password reset using the email
[fez]                           that you registered with
[fez]     list                  lists the dists for the currently logged in user
[fez]     remove                removes a dist from the ecosystem (requires fully
[fez]                           qualified dist name, copy from `list` if in doubt)
[fez]     org                   org actions, use `fez org help` for more info
[fez]     FEZ_CONFIG            if you need to modify your config, set this env var
[fez]   CONFIGURATION (using: /home/tonyo/.fez-config.json)
[fez]     Copy this to a cool location and write your own requestors/bundlers or
[fez]     ignore it and use the default curl/wget/git tools for great success.
===> Testing [OK] for fez:ver<32>:auth<zef:tony-o>:api<0>
===> Installing: fez:ver<32>:auth<zef:tony-o>:api<0>

1 bin/ script [fez] installed to:

Make sure that the last line is in your $PATH so the next set of commands all run smoothly. Now we can start writing the actual module, let’s write ROT13 since it’s a fairly easy problem to solve and this article really is less about module content than how to get working with fez.

Writing the Module

Our module directory structure:

├── lib
│   └── ROT13.rakumod
├── t
│   ├── 00-use.rakutest
│   └── 01-tests.rakutest
└── META6.json

lib is the main content of your module: it’s where all of your module’s utilities, helpers, and organization happens. Each file corresponds to one or more modules or classes; more on that in the META6.json paragraph below.

t contains all of your module’s tests. If you have “author only” tests then you’d also have a directory xt, and that directory works roughly the same. For your users’ sanity, WRITE TESTS!

META6.json is how zef knows what the module is, it’s how fez knows what it’s uploading, and it’s how rakudo knows how to load what and from where. Let’s take a look at the structure of META6.json:

  "name": "ROT13",
  "auth": "zef:tony-o",
  "version": "0.0.1",
  "api": 0,

  "provides": {
    "ROT13": "lib/ROT13.rakumod"

  "depends":       [],
  "build-depends": [],
  "test-depends":  [],

  "tags":        [ "ROT13", "crypto" ],
  "description": "ROT13 everything!"

A quick discussion about dists. A dist is the fully qualified name of your module: it contains the name, auth, and version. It’s how zef can differentiate your ROT13 module from mine. It works in conjunction with use, such as use ROT13:auth<zef:tony-o>, and in zef: zef install ROT13:auth<zef:tony-o>:ver<0.0.1>. The dist string is always qualified with both :auth and :ver internally to raku and the ecosystem, but the end user isn’t required to type the fully qualified dist if they’re less picky about which version/author of the module they’d like. In use statements you can combine auth and ver to get the author or version you’re expecting, or you can omit one or both.

It’s better practice to fully qualify your use statements; as more modules hit the ecosystem with the same name, this practice will help keep your modules running smoothly.
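For instance (using this article’s module purely as an illustration), a fully qualified use looks like this:

```raku
# Pin both the author and the version:
use ROT13:auth<zef:tony-o>:ver<0.0.1>;

# Or pin only the author, accepting any version from them:
# use ROT13:auth<zef:tony-o>;
```

Omitting an adverb simply widens the set of dists that can satisfy the statement.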

  • name: this is the name of the module and becomes part of your dist, it’s what is referenced when your consumers type zef install ROT13.
  • auth: this is how the ecosystem knows who the author is. On fez this is strict, no other rakudo ecosystem guarantees this matches the uploader’s username.
  • version: version must be unique to the auth and name. For instance, you can’t upload two dists with the value of ROT13:auth<zef:tony-o>:ver<0.0.1>.
  • provides: in provides are the key/value pairs mapping module and class names to the file they live in. If you have two modules in one file then you should list the same file twice, with the key for each being each class/module name. All .rakumod files in lib should be in the META6.json file. The keys here are how rakudo knows which file to look for your class/module in.
  • depends: a list of your runtime dependencies

Let’s whip up a quick ROT13 module, in lib/ROT13.rakumod dump the following

unit module ROT13;

sub rot13(Str() $text) is export {
    # Rotate each ASCII letter 13 places, preserving case.
    $text.trans('a..zA..Z' => 'n..za..mN..ZA..M');
}

Great, you can test it now (from the root of your module directory) with raku -I. -e 'use ROT13; say rot13("hello, WoRlD!");'. You should get output of uryyb, JbEyQ!.

Now fill in your test files and run the tests with zef test .
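As a minimal sketch of what t/01-tests.rakutest could contain (the exact test descriptions are up to you):

```raku
# t/01-tests.rakutest
use Test;
use ROT13;

is rot13('hello, WoRlD!'), 'uryyb, JbEyQ!', 'encodes mixed case and punctuation';
is rot13(rot13('hello')), 'hello', 'applying ROT13 twice is a no-op';

done-testing;
```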

Publishing Your Module


If you’re not registered with fez, now’s the time!

$ fez register
>>= Email: omitted@somewhere.com
>>= Username: tony-o
>>= Password:
>>= Registration successful, requesting auth key
>>= Username: tony-o
>>= Password:
>>= Login successful, you can now upload dists

Check Yourself

$ fez checkbuild
>>= Inspecting ./META6.json
>>= meta<provides> looks OK
>>= meta<resources> looks OK
>>= ROT13:ver<0.0.1>:auth<zef:tony-o> looks OK

Oh snap, we’re lookin’ good!


$ fez upload
>>= Hey! You did it! Your dist will be indexed shortly.

Only thing to note here is that if there’s a problem indexing your module then you’ll receive an email with the gripe.

Further Reading

You can read more about fez here:

Perhaps you’d prefer listening:

That’s it! If there’s other things you’d like to know about fez, zef, or ecosystems then send tony-o some chat in IRC or an email!

Day 22 – Santa Claus is Rakuing Along

Part 4 – The Santa Claus Reports


A Christmas ditty sung to the tune of Santa Claus is Coming to Town:

 He’s making a list,
 He’s checking it closely,
 He’s gonna find out who’s Rakuing mostly,
 Santa Claus is Rakuing along.

Santa Claus Operations Update 2021

Part 1 of this article reported on the new journaling process for Santa’s employees and how they keep their individual work logs. Part 2 was a side step to show how to better manage Santa’s code by using the new Zef module repository. Part 3 was another side step because Santa was running out of time.

This article, written by yours truly, junior Elf Rodney, will attempt to showcase the use of Raku’s built-in date, time, and sorting functions along with the ease of class construction to handle the new journals in the aggregate to automate many of the reports that used to take goodly amounts of time. They can now be prepared quickly and thus free resources to research more deeply-observed problem areas.

The Reporting System

The journals are frequently read and manipulated by system-wide programs (most found in the Raku module SantaClaus::Utils) run by root. Individual journals are periodically shortened by extracting older entries which are then concatenated onto hidden .journal.YYYY.MM files (owned by root but readable by all) in the user’s home directory.

The data in the journal files are converted to class instances which are deployed for the moment in two global Raku hashes keyed by task-id and user-id, respectively. (When the new persistent types in Raku are ready, they will be a natural fit to form a large on-disk database).

Before designing classes to use with the journals let’s take a quick look at how we want the data to be accessed and used.

First, the raw data give us, for each user and his assigned task (which may be a sub-task):

  • start time on the task
  • process time
    • one or more reports between start and finish
  • end time of the task
  • notes for each entry

Second, the raw data give us, for each task and sub-task:

  • earliest start time
  • latest end time
  • total employee time expended on the task
  • list of employees working on each task and time each expends on the task

It seems that the data we have so far collected don’t yield the task/sub-task relations, but that is solved with a task-id system designed with that in mind. As a start, the task-id is a two-field number with the first field being the task number and the second field being the sub-task number. Supervisors will normally use the task number and their subordinates the sub-task number.

For example, a task number might be 103458 with sub-tasks of 200 and 202. The three employees working on them would enter these numbers:

  • Supervisor: 103458-000
  • Employee A: 103458-200
  • Employee B: 103458-202

The final system could be as detailed as desired, but the two-level task-id is sufficient for now.
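A minimal sketch (variable names are hypothetical) of how such a two-field task-id could be split apart in Raku:

```raku
# Split a task-id like "103458-202" into its task and sub-task fields.
my $task-id = '103458-202';
my ($task, $sub-task) = $task-id.split('-')».Int;
say $task;     # 103458
say $sub-task; # 202
```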

[Sorry, this article will be finished later – I am needed for last-minute jobs in the factory!]


Santa now has a personnel and task reporting system that automatically produces continuously updated reports on current tasks and the personnel resources used for them. Raku’s built-in date, time, and sorting functions help ease the work of the IT department in their job of programming and maintaining the system.

Santa’s Epilogue

Don’t forget the “reason for the season:” ✝

As I always end these jottings, in the words of Charles Dickens’ Tiny Tim, “may God bless Us, Every one!” [1]


  1. A Christmas Carol, a short story by Charles Dickens (1812-1870), a well-known and popular Victorian author whose many works include The Pickwick Papers, Oliver Twist, David Copperfield, Bleak House, Great Expectations, and A Tale of Two Cities.

Day 21 – Santa Claus is Rakuing Along

Part 3 – Santa Takes a Break


A Christmas ditty sung to the tune of Santa Claus is Coming to Town:

He’s making a list,
He’s checking it closely,
He’s gonna find out who’s Rakuing mostly,
Santa Claus is Rakuing along.

Santa Claus Operations Update 3, 2021

Santa was tired. He wanted to brag about his new reporting and analysis tools as he had planned, but he knew he had to rest up for his big day coming up soon (sorry, my friend Juan Merelo, for the quick shift!).

He was sure that boys and girls around the world would be anxiously awaiting his visit, but he also realized he was just a stand-in for the real gift of Christmas annually celebrated during the Christian Advent season. He knew the date of the First Sunday of Advent varies year-to-year somewhat like Easter, but whereas Easter needs a complex algorithm to calculate its actual calendar date, First Advent is more regular, with just a simple rule to follow. (Actually, he knew there are at least three rules that could be applied, each delivering the correct result.)

He thought it would be relaxing to see how Raku’s powerful, built-in Date system could be applied to the task.

He started by writing down the three rules he knew:

  1. Find the Sunday closest to November 30 (The Feast of St. Andrew), either before or after. If November 30th is a Sunday, then that’s First Advent and St. Andrew gets moved.
  2. Find the Sunday following the last Thursday in November.
  3. Find the 4th Sunday before Christmas, not counting the Sunday which may be Christmas.

“Those look pretty easy to implement,” he thought to himself, “let’s see what I can do without calling in the experts over in IT!

“The Date object should make the job easy enough. Let’s try the first method.

my $d   = Date.new($y, 11, 30); # Feast of St. Andrew
my $dow = $d.day-of-week;       # 1..7 (Mon..Sun)
# sun mon tue wed thu fri sat sun
#  7   1   2   3   4   5   6   7
#  0   1   2   3  -3  -2  -1   0
if $dow == 7 {
    # bingo!
    return $d;
}
elsif $dow < 4 {
    # closest to previous Sunday
    return $d - $dow;
}
else {
    # closest to following Sunday
    return $d + (7 - $dow);
}

“Now the second method.

my $d   = Date.new($y, 11, 30); # last day of November
my $dow = $d.day-of-week;
while $dow != 4 {
    $d -= 1;
    $dow = $d.day-of-week;
}
# found last Thursday in November;
# the following Sunday is 3 days hence
$d += 3;
return $d;

“And finally, the third method.

my $d   = Date.new($y, 12, 25); # Christmas
my $dow = $d.day-of-week;
if $dow == 7 {
    # Christmas is on Sunday, count 28 days back
    return $d - 28;
}
else {
    # find prev Sunday, count 21 days back from that
    # sun mon tue wed thu fri sat sun
    #  7   1   2   3   4   5   6   7
    #  0   1   2   3  -3  -2  -1   0
    return $d - $dow - 21;
}

“Which method should I choose? They all work properly, as I know from running each against a set of data collected from several sources. I know–I should choose the one that is fastest, since it will probably be part of a calendar creation module someday!

“If I were really serious I would run them using Raku’s Telemetry class, but I’ll leave that to the experts. What I can do, though, is run each over many iterations, measure elapsed time using the Raku GNU::Time module, and then compare the results to judge the best.

Santa started to work on speed testing but soon discovered the author of GNU::Time didn’t give a good example of how to use it for this situation and he didn’t have time to experiment any more–he knew from experience that programming in Raku is addictive (just ask Mrs. Claus!), and he couldn’t take a chance on being late for his date for Christmas. So he did the next best thing: he filed an issue with GNU::Time.

He concluded that, since he couldn’t easily determine a clear winner, he would select method 3 since it seemed to him to be the most elegant of the three, and the simplest. After all, TIMTOWTDI!
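Putting method 3 into a complete subroutine (the sub name is mine, for illustration, not from a published module), Santa's choice might look like:

```raku
# First Sunday of Advent: the 4th Sunday before Christmas,
# not counting Christmas itself if it falls on a Sunday.
sub first-advent(Int $y --> Date) {
    my $d   = Date.new($y, 12, 25); # Christmas
    my $dow = $d.day-of-week;       # 1..7 (Mon..Sun)
    $dow == 7 ?? $d - 28 !! $d - $dow - 21
}

say first-advent(2021); # 2021-11-28
```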


Raku has numerous capabilities that endear it to even novice programmers, but for those who have to pay attention to time and dates, its Date and DateTime classes take the Christmas Pudding!

Note: See the new Raku modules Date::Christian::Advent and Date::Easter. Both use App::Mi6 for management and the new, and recommended, Zef module repository.

Santa’s Epilogue

Don’t forget the “reason for the season:” ✝

As I always end these jottings, in the words of Charles Dickens’ Tiny Tim, “may God bless Us, Every one!” [1]


  1. A Christmas Carol, a short story by Charles Dickens (1812-1870), a well-known and popular Victorian author whose many works include The Pickwick Papers, Oliver Twist, David Copperfield, Bleak House, Great Expectations, and A Tale of Two Cities.

Day 20 – Create beautiful text charts

Santa got his weekly gift-wrapping report from the Gift Wrapping department. It contained lots of numbers:

1 3 7 8 5 3 2 1 3

Each number corresponded to the gifts wrapped by one elf in the department, in alphabetical order, starting with Alabaster Snowball and continuing with Bushy Evergreen. But numbers don’t sing, and that made Santa not sing either. A simple way to check visually what was going on was needed.

Simple text charts

The Unix philosophy is to do one thing, and do it well. Small utilities, with just a few lines of code and no dependencies, are easy to build upon, understand, and modify.

They are also good for learning a language. Text::Chart was created six years ago almost to the day, and slightly more than one year after Raku, then called Perl 6, was officially born with its Christmas release. I didn’t know too much about the language, and even less about how to release a module, but I tried and did it anyway. Essentially, it’s a single function:

unit module Text::Chart;

constant $default-char is export = "█";

sub vertical ( Int :$max = 10,
               Str :$chart-chars = $default-char,
               *@data ) is export {
    my $space = " ";
    my @chars = $chart-chars.comb;
    my $chart;
    for $max^...0 -> $i {
        for 0..^@data.elems -> $j {
            $chart ~= @data[$j] > $i ?? @chars[$j % @chars.elems] !! $space;
        }
        $chart ~= "\n";
    }
    return $chart;
}

It uses a default block character to build the bars that compose the chart, and then defines a function that takes a maximum value (arbitrarily set by default to 10), a set of chars to build the bars, and the data; a couple of nested loops (which originally even used loop) then build the chart line by line, starting from the top. There’s no high magic, nothing fancy. And many errors, some of which I only discovered when I was writing this article.
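For instance, the function can be called directly from code (this assumes the module is installed; the exact bar glyphs depend on the chart characters configured):

```raku
use Text::Chart;

# Chart the gift-wrapping numbers from code rather than the CLI;
# :max sets the height of the tallest bar.
print vertical( :max(8), 1, 3, 7, 8, 5, 3, 2, 1, 3 );
```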

To make it easy to use directly, a command-line script is installed along with the module.

Santa builds himself a chart

Using the downloaded module, Santa types:

raku-text-chart 1 3 7 8 5 3 2 1 3


The first and next-to-last elf are going to fail their next review

Hey, not perfect, but at least it’s clear by how much the fourth elf, Shinny Upatree, outwraps the others in the bunch.

However, this reminded him of something. What if…?

use Text::Chart;
my @data = < 1 2 3 4 5 6 7 6 5 4 3 2 1 >;
my $midway = (@data.elems/2).truncate;
my $max = max(@data);
my &left-pad = { " " x $midway ~ $_ ~ "\n"};
say left-pad("") ~ vertical( :$max, @data ) ~ left-pad("") x 2;

We can’t help but use a left-pad, right? And it’s mightily useful here. We get this:

One Christmas tree…

Not as nice as the magic tree, but as useful as my friend Tom Browder’s post to wish y’all a merry Christmas!

Day 19 – Let it Cro

Ah, advent. That time of year when the shops are filled with the sound of Christmas songs – largely, the very same ones they played when I was a kid. They’re a bit corny, but the familiarity is somehow reassuring. And what better way to learn about this year’s new Cro features than through the words of the Christmas hits?

Dashing through the Cro

The Cro HTTP client scores decently well on flexibility. The asynchronous API fits neatly with Raku’s concurrency features, and from the start it’s been possible to get the headers as soon as they arrive, then choose how to get the body – including obtaining it as a Supply of Blobs, which is ideal for dealing with large downloads.

Which is all well and good, but a lot of the time, we just want to get the request body, automatically deserialized according to the content type. This used to be a bit tedious, at least by Raku standards:

my $response = await Cro::HTTP::Client.get: $url;
my $body = await $response.body;

Now there’s a get-body method, shortening it to:

my $body = await Cro::HTTP::Client.get-body: $url;

There’s post-body, put-body and friends to go with it too. (Is there a head-body? Of course not, silly. Ease off the eggnog.)

Oh I wish it could TCP_NODELAY

Cro offers improved latency out of the box thanks to setting TCP_NODELAY automatically on sockets. This disables Nagle’s algorithm, which reduces the network traffic of applications that do many small writes by collecting multiple writes together to send at once. That makes sense in some situations, but less so in the typical web application, where the resulting increased latency of HTTP responses or WebSocket messages can make the application feel a little less responsive.

I saw mummy serving a resource

Cro has long made it easy to serve up static files on disk:

my $app = route {
    get -> {
        static 'content/index.html';
    }
}

Which is fine in many situations, but not so convenient for those who would like to distribute their applications in the module ecosystem, or to have them installable with tools like zef. In these situations, one should supply static content as resources. Serving those with Cro was, alas, less convenient than serving files on disk.

Thankfully, that’s no longer the case. Within a route block, we first need to associate it with the distribution resources using resources-from. Then it is possible to use resource in the very same way as static.

my $app = route {
    resources-from %?RESOURCES;

    # It works with exact resources
    get -> {
        resource 'content/index.html';
    }

    # And also with path segments
    get -> 'css', *@path {
        resource 'content/css', @path;
    }
}

We’ve also made it possible to serve templates from resources; simply call templates-from-resources, and then use template as usual. See the documentation for details.

Last Christmas I gave you Cro parts

And the very next day, you made a template. And oh, was it more convenient than before. In many applications pages have common elements: an indication of what is in the shopping cart, or the username of the currently logged-in user. Previously, Cro left you to pass this into every template. Typically one would write some kind of sub to envelope the main content and include the other data:

sub shop($db) {
    my $app = route {
        sub env($session, $body) {
            %( :user($session.user), :basket($session.basket), :$body )
        }

        get -> MySession $session, 'product', $id {
            with $db.get-product($id) -> $product {
                template 'product.crotmp', env($session, $product);
            }
            else {
                not-found;
            }
        }
    }
}

This works, but gets a bit tedious – not only here, but also inside of the template, where the envelope has to be unpacked, values passed into the layout, and so forth.

<:use 'layout.crotmp'>
<|layout(.body, .basket)>

Template parts improve the situation. Inside of the route block, we use template-part to provide the data for a particular “part”. This can, like a route handler, optionally receive the session or user object.

sub shop($db) {
    my $app = route {
        template-part 'basket', -> MySession $session {
            $session.basket
        }

        template-part 'user', -> MySession $session {
            $session.user
        }

        get -> MySession $session, 'product', $id {
            with $db.get-product($id) -> $product {
                template 'product.crotmp', $product;
            }
            else {
                not-found;
            }
        }
    }
}

Page templates are now simpler, since they don’t have to pass along the content for common page elements:

<:use 'layout.crotmp'>

Meanwhile, in the layout, we can obtain the part data and use it:

<:macro layout>
  <div class="basket">
    <:part basket($basket)>
      <$basket.items> items worth <$basket.value> EUR
    </:>
  </div>
</:>

The special MAIN part can be used to obtain the top-level object passed to the template, which provides an alternative to the topic approach. One can provide multiple arguments for the MAIN part (or any other part) by using a Capture literal:

sub shop($db) {
    my $app = route {
        get -> MySession $session {
            my $categories = $db.get-categories;
            my $offers = $db.get-offers;
            template 'shop-start.crotmp', \($categories, $offers);
        }
    }
}

The template would then look like this:

<:part MAIN($categories, $offers)>

The Comma IDE already knows about this new feature, and allows navigation between the part usage in a template and the part data provider in the route block.

While shepherds watched their docs by night

A further annoyance when working with Cro templates was the caching of their compilation. While this is a fantastic optimization when the application is running in production – the template does not have to be parsed each time – it meant that one had to restart the application to test out template changes. While the cro development tool would automate the restarts, it was still a slower development experience than would have been ideal.

Now, setting CRO_DEV=1 in the environment will invalidate the compilation of templates that change, meaning that changes to templates will be available without a restart.

The proxy and the IP

In many situations we might wish for your application to handle a HTTP request by forwarding it to another HTTP server, possibly tweaking the headers and/or body in one or both directions. For example, we might have several services that are internal to our application, and wish to expose them through a single facade that also does things like rate limiting.

Suppose we have a payments service and a bookings service, and wish to expose them in a single facade service. We could do it like this:

my $app = route {
    # /payments/foo proxied to https://payments-service/foo
    delegate <payments *> => Cro::HTTP::ReverseProxy.new:
        to => 'https://payments-service/';

    # /bookings/foo proxied to https://bookings-service/foo
    delegate <bookings *> => Cro::HTTP::ReverseProxy.new:
        to => 'https://bookings-service/';
}

This is just scratching the surface of the many features of Cro::HTTP::ReverseProxy. Have fun!

God REST ye merry gentlemen

Whether you’re building REST services, HTTP APIs, server-side web applications, reactive web applications using WebSockets, or something else, we hope this year’s Cro improvements will make your Raku development merry and bright.

Day 18 – Santa and the Magic Tree (M-Tree)

It was Christmas Eve in the Workhouse and Santa was getting worried about how the weight of all those presents would reduce the height of his sled’s flight path. Would he be able to clear the tops of the tallest Christmas trees on his worldwide journey?

He asked one of his helpers to whip up a quick script to see how much danger he would be in and in stepped p6elf to do the job.

Naturally p6elf (pronounced “Physics Elf”) wanted to show off his raku skills to grow his reputation with Santa, the reindeer and all the other elves (knowing how much Santa is into the whole raku thing). So he started by picking up two modules and mixing them together. To share with the others, so that they could see how easily they could make their own models, he used Brian Duggan’s cool Jupyter notebook module.

# This setup cell inherits from the Math::Polygons classes such as Point and
# Triangle and overrides their plain Numeric attributes with the
# Physics::Measure classes Length and Area.
use Math::Polygons;
use Physics::Measure :ALL;

$Physics::Measure::round-val = 1;

class M-Point is Point {
    has Length $.x;
    has Length $.y;
}

class M-Polygon is Polygon {
    has M-Point @.points;
}

class M-Rectangle is Rectangle {
    has M-Point $.origin;
    has Length  $.width;
    has Length  $.height;

    method area( --> Area ) {
        $!height * $!width
    }
}

class M-Triangle is Triangle {
    has M-Point $.apex is required;
    has Length  $.side is required;

    method height( --> Length ) {
        sqrt($!side**2 - ($!side/2)**2)
    }
    method area( --> Area ) {
        ( $.height * $!side ) / 2
    }
}

That was quick, he thought, the raku OO approach really is cool and concise and just seamlessly applies my classes as types to check the correctness of my work.

p6elf then went back on the keys to use his new classes and get to a model tree…

my $tri1 = M-Triangle.new( stroke => "green", fill => "green",
    apex => M-Point.new(100m, 50m),
    side => 50m,
);
my $tri2 = M-Triangle.new( stroke => "green", fill => "green",
    apex => M-Point.new(100m, 75m),
    side => 75m,
);
my $tri3 = M-Triangle.new( stroke => "green", fill => "green",
    apex => M-Point.new(100m, 100m),
    side => 100m,
);
my $rect = M-Rectangle.new( stroke => "brown", fill => "brown",
    origin => M-Point.new(90m, 185m),
    width  => 20m,
    height => 40m,
);
my @elements = [ $tri1, $tri2, $tri3, $rect ];
say "Tree Height is ", [+] @elements.map(*.height);
say "Tree Area is ", [+] @elements.map(*.area);
my $tree = Group.new( :@elements );
my $drawing = Drawing.new( elements => $tree );

Wowee, look how I can just type in 100m and the Physics::Measure postfix<m> operator magically makes a Length object … no need to repetitively type my Length $d = Length.new(value => 100, units => 'm'); every time (provided I have an SI unit / SI prefix such as cm, kg, ml and so on). And, like magic, a beautiful Xmas tree appeared on the screen.

Then p6elf realised his mistake. While Santa would need to fly over the towering trees that surround the North Pole – where sizes are measured in metric units – he would also need to deliver many, many presents to the kids in America – and their trees are purchased by the foot. Of course, raku to the rescue – since p6elf was too lazy to retype all the embedded unit values in his first model, he created a Magic Tree (M-Tree) class and parameterized the dimensions of the elements like this:

class M-Tree {
    has M-Point     $.apex;
    has Length      $.size;
    has M-Triangle  $!top;
    has M-Triangle  $!middle;
    has M-Triangle  $!bottom;
    has M-Rectangle $!base;

    method elements {
        [ $!top, $!middle, $!bottom, $!base ]
    }
    method height( --> Length ) {
        [+] $.elements.map(*.height)
    }
    method area( --> Area ) {
        [+] $.elements.map(*.area)
    }
    method TWEAK {
        my $stroke := my $fill;
        $fill = "green";

        # calculate co-ords relative to top of drawing, according to height
        my \x := $!apex.x;
        my \s := $!size;
        my \p = [ (s / 4), (s * 3/8), (s / 2) ];

        $!top    = M-Triangle.new( :$stroke, :$fill,
                       apex => M-Point.new(x, p[0]),
                       side => p[0] );
        $!middle = M-Triangle.new( :$stroke, :$fill,
                       apex => M-Point.new(x, p[1]),
                       side => p[1] );
        $!bottom = M-Triangle.new( :$stroke, :$fill,
                       apex => M-Point.new(x, p[2]),
                       side => p[2] );

        $fill = "brown";
        $!base   = M-Rectangle.new( :$stroke, :$fill,
                       origin => M-Point.new(( 0.9 * x ), (([+] p) - (0.2 * s))),
                       width  => 0.1 * s,
                       height => 0.2 * s );
    }
}

#my $size = 200m;
my $size = ♎️'50 ft';
my M-Point $apex .= new(($size / 2), ($size / 4));
my M-Tree $us-tree .= new(:$apex, :$size);
say "Tree Height is {$us-tree.height} (including the base)";
say "Tree Area is {$us-tree.area}";
my $drawing = Drawing.new( elements => $us-tree.elements );

Look how cool programming is: I can capture the shape of my object and just need to set the control dimension $size in one place. Instead of 200m via the postfix syntax, I can use the libra emoji prefix<♎️>, which uses powerful raku Grammars to read the units and automatically convert. Let’s take a look at the result:

Phew, no need to worry, Santa can easily get over these smaller forests even when the reindeer are tired and low on energy…

… energy – hmmm maybe I can use raku Physics::Measure to work out how much energy we need to load up in terajoules(TJ) and then convert that to calories to work out how much feed Rudy and the team will need … mused p6elf in a minced pie induced dream.


With inspiration from: Jonathan Stowe’s perl6 advent calendar Christmas Tree and Codesections’ Learn Raku With: HTML Balls

Day 17 – Generic data structure traversals with roles and introspection


I am a lambdacamel and therefore I like to adapt concepts and techniques from functional programming, and in particular from the Haskell language, to Raku. One of the techniques that I use a lot is generic traversals, also known as “Scrap Your Boilerplate” after the title of the paper by Simon Peyton Jones and Ralf Lämmel that introduced this approach. In their words:

Many programs traverse data structures built from rich mutually-recursive data types. Such programs often have a great deal of “boilerplate” code that simply walks the structure, hiding a small amount of “real” code that constitutes the reason for the traversal. ”Generic programming” is the umbrella term to describe a wide variety of programming technology directed at this problem.

So to save you having to write your own custom traversal, this approach gives you generic functions that do traversals on arbitrary data structures. In this article, I will explain how you can easily implement such generics in Raku for arbitrary role-based datastructures. There is no Haskell in this article.

Roles as datatypes by example

I implemented these generics for use with role-based datatypes. Raku’s parameterised roles make creating complex datastructures very easy. I use the roles purely as datatypes, so they have no associated methods.

For example, here is a code snippet in a little language that I use in my research.

map (f1 . f2) (map g (zipt (v1,map h v2)))

The primitives are map, . (function composition), zipt and the tuple (...), and the names of functions and vectors. The datatype for the abstract syntax of this little language is called Expr and looks as follows:

# Any expression in the language
role Expr {}

# map f v
role MapV[Expr \f_, Expr \v_] does Expr {
    has Expr $.f = f_;
    has Expr $.v = v_;
}

# function composition f . g
role Comp[Expr \f_, Expr \g_] does Expr {
    has Expr $.f = f_;
    has Expr $.g = g_;
}

# zipt t turns a tuple of vectors into a vector of tuples
role ZipT[Expr \t_] does Expr {
    has Expr $.t = t_
}

# tuples are just arrays of Expr
role Tuple[Array[Expr] \e_] does Expr {
    has Array[Expr] $.e = e_
}

# names of functions and vectors are just string constants
role Name[Str \n_] does Expr {
    has Str $.n = n_
}

The Expr role is the toplevel datatype. It is empty because it is implemented entirely in terms of the other roles, which thanks to the does are all of type Expr. And most of the roles have attributes that are also of type Expr. So we have a recursive datatype, a tree with the Name node as leaves.

We can now write the abstract syntax tree (AST) of the example code using this Expr datatype:

my \ast = MapV[
    Comp[ Name['f1'].new, Name['f2'].new ].new,
    MapV[
        Name['g'].new,
        ZipT[
            Tuple[ Array[Expr].new(
                Name['v1'].new,
                MapV[ Name['h'].new, Name['v2'].new ].new
            ) ].new
        ].new
    ].new
].new;

The typical way to work with such a datastructure is using a given/when:

sub worker(Expr \expr) {
    given expr {
        when MapV {...}
        when Comp {...}
        when ZipT {...}
    }
}

Alternatively, you can use a multi sub:

multi sub worker(MapV \expr) {...}
multi sub worker(Comp \expr) {...}
multi sub worker(ZipT \expr) {...}

In both cases, we use the roles as the types to match against for the actions we want to take.

(For more details about algebraic datatypes see my earlier article Roles as Algebraic Data Types in Raku.)


If I want to traverse the AST above, what I would normally do is write a worker as above, where for every node except the leaf nodes, I would call the worker recursively, for example:

sub worker(Expr \expr) {
    given expr {
        when MapV {
            my \f_ = worker(expr.f);
            my \v_ = worker(expr.v);
            ...
        }
        ...
    }
}

But wouldn’t it be nice if I did not have to write that code at all? Enter generics.

I base my naming and function arguments on that of the Haskell library Data.Generics. It provides many schemes for traversals, but the most important ones are everything and everywhere.

  • everything is a function which takes a datastructure, a matching function, an accumulator and an update function for the accumulator. The matching function defines what you are looking for in the datastructure. The result is put into the accumulator using the update function.

    sub everything(
        Any \datastructure,
        Any \accumulator,
        &update,
        &match
        --> Any) {...}
  • everywhere is a function which takes a datastructure and a modifier function. The modifier function defines which parts of the datastructure you want to modify. The result of the traversal is a modified version of the datastructure.

    sub everywhere(
        Any \datastructure,
        &modifier
        --> Any) {...}

The most common case for the accumulator is to use a list, so the update function appends lists to the accumulator:

sub append(\acc, \res) {
    return (|acc, |res);
}

As an example of a matching function, let’s find all the function and vector names in our AST above:

sub matcher(\expr) {
    given expr {
        when Name {
            return [expr.n]
        }
    }
    return []
}

So if we find a Name node, we return its n attribute as a single-element list; otherwise we return an empty list.

my \names = everything(ast,[],&append,&matcher); 
# => returns (f1 f2 g h v1 v2)

Or let’s say we want to change the names in this AST:

sub modifier(\t) {
    given t {
        when Name {
            Name[t.n ~ '_updated'].new
        }
        default {t}
    }
}

my \ast_ = everywhere(ast,&modifier); 
# => returns the AST with all names appended with "_updated"

Implementing Generics

So how do we implement these magic everything and everywhere functions? The problem to solve is that we want to iterate through the attributes of every role without having to name them. The solution is to use Raku’s Metaobject Protocol (MOP) for introspection. In practice, we use the Rakudo-specific Metamodel. We need only three methods: attributes, get_value and set_value. With these, we can iterate through the attributes and visit them recursively.
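These three methods are enough for a tiny self-contained illustration (the Point class here is just an ad-hoc example, unrelated to the Expr roles):

```raku
class Point {
    has $.x;
    has $.y;
}

my $p = Point.new(x => 1, y => 2);

# Iterate over the attributes without naming them
for Point.^attributes -> $attr {
    say $attr.name, ' = ', $attr.get_value($p);
}

# Update an attribute via the MOP, bypassing the accessor
Point.^attributes[0].set_value($p, 42);
say $p.x; # 42
```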

Attributes can be $, @ or % (and even &, but I will skip this). What this means in terms of Raku’s type system is that they can be scalar, Iterable or Associative, and we need to distinguish these cases. With that, we can write everything as follows:

sub everything (\t, \acc, &update, &match) {
    # Arguments are immutable, so copy to $acc_
    my $acc_ = acc;
    # Match and update $acc_
    $acc_ = update($acc_, match(t));
    # Test the attribute type
    if t ~~ Associative {
        # Iterate over the values
        for t.values -> \t_elt {
            $acc_ = everything(t_elt, $acc_, &update, &match)
        }
        return $acc_;
    }
    elsif t ~~ Iterable {
        # Iterate
        for |t -> \t_elt {
            $acc_ = everything(t_elt, $acc_, &update, &match)
        }
        return $acc_;
    }
    else {
        # Go through all attributes
        for t.^attributes -> \attr {
            # Not everything returned by ^attributes
            # is of type Attribute
            if attr ~~ Attribute {
                # Get the attribute value
                my \expr = attr.get_value(t);
                if not expr ~~ Any { # for ContainerDescriptor::Untyped
                    return $acc_;
                }
                # Descend into this expression
                $acc_ = everything(expr, $acc_, &update, &match);
            }
        }
    }
    return $acc_
}

So what we do here essentially is:

  • for @ and % we iterate through the values
  • iterate through the attributes using ^attributes
  • for each attribute, get the expression using get_value
  • call everything on that expression
  • the first thing everything does is update the accumulator

everywhere is similar:

sub everywhere (\t_, &modifier) {
    # Modify the node
    my \t = modifier(t_);
    # Test the type for Iterable or Associative
    if t ~~ Associative {
        # Build the updated map
        my %t_;
        for t.keys -> \t_k {
            my \t_v = t{t_k};
            %t_{t_k} = everywhere(t_v, &modifier);
        }
        return %t_;
    }
    elsif t ~~ Iterable {
        # Build the updated list
        my @t_ = [];
        for |t -> \t_elt {
            @t_.push( everywhere(t_elt, &modifier) );
        }
        return @t_;
    }
    else {
        # t is immutable, so copy to $t_
        my $t_ = t;
        for t.^attributes -> \attr {
            if attr ~~ Attribute {
                my \expr = attr.get_value(t);
                if not expr ~~ Any { # for ContainerDescriptor::Untyped
                    return $t_;
                }
                # Descend, then update the attribute in the copy
                my \expr_ = everywhere(expr, &modifier);
                attr.set_value($t_, expr_);
            }
        }
        return $t_;
    }
}

So what we do here essentially is:

  • for @ and % we iterate through the values
  • iterate through the attributes using ^attributes
  • for each attribute, get the expression using get_value
  • call everywhere on that expression
  • update the attribute using set_value

This works without roles too

First of all, the above works for classes too, because the Metamodel methods are not specific to roles. Furthermore, because we test for @ and %, the generics above work just fine for data structures without roles, built from hashes and arrays:

my \lst = [1,[2,3,4,[5,6,7]],[8,9,[10,11,[12]]]];

sub matcher (\expr) {
    given expr {
        when List {
            if expr[0] % 2 == 0 {
                return [expr]
            }
        }
    }
    return []
}

my \res = everything(lst,[],&append,&matcher);
say res;
# ([2 3 4 [5 6 7]] [8 9 [10 11 [12]]] [10 11 [12]] [12])

Or for hashes:

my %hsh =
    a => {
        b => {
            c => 1,
            a => {
                b => 1, c => 2
            }
        },
        c => {
            a => 3
        }
    },
    b => 4,
    c => { d => 5, e => 6 }
;

sub hmatcher (\expr) {
    given (expr) {
        when Map {
            my $acc = [];
            for expr.keys -> \k {
                if k eq 'a' {
                    $acc.push(expr{k})
                }
            }
            return $acc;
        }
    }
    return []
}

my \hres = everything(%hsh,[],&append,&hmatcher);
say hres;
# ({b => {a => {b => 1, c => 2}, c => 1}, c => {a => 3}} {b => 1, c => 2} 3)


Generic datastructure traversals are a great way to reduce boilerplate code and focus on the actual purpose of the traversal. And now you can have them in Raku too. I have shown the implementation of the two main schemes, everything and everywhere, and shown that they work for role-based datastructures as well as traditional hash- or array-based datastructures.