# Planet Raku

Roman Baumer (Freenode: rba #raku or ##raku-infra) / 2021-11-29T11:19:16

## gfldex: Leaky Rakudo

Yesterday the discord-bridge-bot refused to perform its 2nd job: EVAL All The Things! The EVALing is done via shell-out and requires a fair bit of RAM (Rakudo is about as slim as Santa). After about 3 weeks the fairly simple bot had grown from about half a GB to one and a half – while mostly waiting for the intertubes to deliver small pieces of text. I complained on IRC and was advised to take heap snapshots. Since I didn’t know how to make heaps of snapshots, I had to take timo’s directions towards use Telemetry. As snap(:heap) wasn’t willing to surrender the filename of the snapshot (I want to compress the file, as it is going to get big over time) I had a look at the source. I also requested a change to Rakudo so I don’t have to hunt down the filename, which was fulfilled by lizmat 22 minutes later. Since you may not have a very recent Rakudo, the following snippet might be useful.

    multi sub postfix:<minute>(Numeric() \seconds) { seconds * 60 }
    multi sub postfix:<minutes>(Numeric() \seconds) { seconds * 60 }
    multi sub postfix:<hour>(Numeric() \seconds) { seconds * 3600 }
    multi sub postfix:<hours>(Numeric() \seconds) { seconds * 3600 }
    multi sub postfix:<day>(Numeric() \seconds) { seconds * 86400 }
    multi sub postfix:<days>(Numeric() \seconds) { seconds * 86400 }

    start react whenever Supply.interval(1day) {
        note ‚taking snapshot‘;
        use Perl6::Compiler:from<NQP>;
        sub compress(Str() $file-name) { run «lz4 -BD --rm -qf $file-name» }

        my $filename = 'raku-bot-' ~ now.DateTime.&{ .yyyy-mm-dd ~ '-' ~ .hh-mm-ss } ~ '.mvmheap';
        Perl6::Compiler.profiler-snapshot(kind => "heap", filename => $filename<>);
        compress($filename);   # compress the snapshot as announced above
    }
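With a Rakudo recent enough to include lizmat’s change described above, snap(:heap) hands back the filename directly, so the low-level compiler call is no longer needed. A sketch, assuming such a Rakudo:

```raku
use Telemetry;

# On recent Rakudos, snap(:heap) returns the name of the snapshot file,
# so it can be compressed right away without hunting it down.
my $file = snap(:heap);
run «lz4 -BD --rm -qf $file»;
```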
WORKDIR $DIR

# Change to non-privileged user
USER raku

# Will run this
ENTRYPOINT ["/home/raku/test.sh"]

Here’s my CLI incantation to do the build:

git clone https://github.com/p6steve/raku-Physics-Unit.git
sudo docker build -t arp-unit-deps .
sudo docker run -t -v /home/ubuntu/raku-Physics-Unit:/test arp-unit-deps --verbose
sudo docker tag arp-unit-deps p6steve/alpine-raku-physics:arp-unit-deps
sudo docker push p6steve/alpine-raku-physics:arp-unit-deps

And to run it on the macOS Docker Desktop client:

docker pull p6steve/alpine-raku-physics:arp-unit-deps
docker run -t -v //path-to/raku-Physics-Measure:/test p6steve/alpine-raku-physics:arp-unit-deps --verbose

Here’s the list of images that I built – in each case, the image is built with the module dependencies, since it is used for faster testing of the module…

• raku-Physics-Units (ie Physics-Units deps)
• raku-Physics-Measure (ie Physics-Measure deps)
• raku-Physics-Constants (ie Physics-Constants deps)
• raku-Physics (full install for downstream code)

These are all now built and pushed to https://hub.docker.com/r/p6steve/alpine-raku-physics – enjoy!

## Vector Two: Dump Docker Desktop

So – while this proved the point and established a general need for a clean Ubuntu build environment for adjusting my images, I felt limited by going to AWS for a paid build for this step, and still unnecessarily hemmed in and slowed down by the macOS Docker Desktop confection. A bit more research turned up vftools … and this excellent recipe from Jan Mikeš… take a look to see how Jan describes a 40x load speed improvement (from 160s to 3.7s).
Here’s what it has provided for me (ymmv):

• VM in ~/ubuntu-vm
• Docker installed on VM
• VM hostname is ubuntu
• You can ssh [email protected] without password
• You can ssh ubuntu without password
• Global start-ubuntu command on mac
• Backup at ~/ubuntu-vm/disk-backup.img.tar.gz

So now I can start my ubuntu-vm with docker in the morning and reach for something like:

docker pull --platform linux/arm64 rakudo-star
docker run -it --entrypoint sh -l -c rakudo-star raku

Line 2 of this takes about 1s to get a raku REPL prompt. Yessssss!

So now I am off to scratch my head with:

• What are the docker images I want to compose and build for my dev needs
• How to adjust Comma IDE to run / ssh / utilise this CLI

Please do leave any feedback and comments (click here & scroll down) …

~p6steve

## Rakudo Weekly News: 2021.47 David H. Adler RIP

### Published by liztormato on 2021-11-23T13:26:44

This week brought the sad news that David H. Adler has passed away. David had many hobbies, at all of which he excelled (knowledge of Monty Python, Doctor Who, and really, really bad movies, to name but a few). The Perl / Raku community was only one of the communities he was part of.

David was involved in the very early Raku development process. More recently, he was involved in documenting various features of the Raku Programming Language. Attending many Perl and related Open Source conferences, he was a familiar sight and a good person to spend time with (many pictures). He is sorely missed (FaceBook, /r/perl, blogs.perl.org, presentation from 2016).

### Pod Editing

Alexandr Zahatski has announced a new version of Podlite, an easy-to-use editor for documenting Raku modules and programs (which even has an online version). New features are autocomplete and snippets (/r/rakulang comments).

### Not so happy on the M1

Steve Roe elaborates on the journey with Raku on their new M1 laptop with macOS Monterey in “Raku at the Monterey Docks“.
This journey is not over just yet, so be sure to tune in for comments / suggestions and a possible follow-up blog post.

### Grant News

Jonathan Worthington‘s Grant Proposal to build more optimizations upon the new Raku dispatch mechanism has been approved by the Grant Committee. This ensures at least 200 more hours of Jonathan‘s excellent development work!

### FOSDEM 2022

No news yet from the FOSDEM organizers on the application for a Raku DevRoom at FOSDEM 2022.

### Still coming closer!

Only a few weeks to go, and it’s Advent Calendar time again! Make sure you get a slot in this year’s Advent Calendar for the Raku Programming Language, by adding your blog post proposal to the preliminary list of authors and articles!

### Weeklies

Weekly Challenge #140 is available for your perusal.

### No November release

The combination of still having some known issues resulting from the new-disp work, and the fact that there is still no official Rakudo Release Manager available, has made the core team decide to skip the November Rakudo release. Instead, an early December release is being targeted for the 4th of December 2021, to be picked up by the regular release schedule in January 2022 again.

The Rakudo Team is still looking for someone willing and able to take on the role of release manager for the upcoming MoarVM, NQP and Rakudo releases. You don’t have to be an ace programmer, or have intimate knowledge of the code base. The only things you need are some time, maybe some hardware to run tests, and the ability to follow the Release Guide. If you are interested in doing this responsible job, please make yourself known on the #raku-dev channel on Libera.chat. Your efforts will be greatly appreciated!

### New Pull Requests

### Core Developments

• Stefan Seifert‘s work of the past months on JITting of NativeCall based on the new dispatch mechanism was merged: this made the csv-ip5xs-20 benchmark almost 3x as fast.
• Stefan Seifert also fixed an issue with CATCH blocks setting $! when they really shouldn’t.
• Elizabeth Mattijsen added support for specifying an Iterable with IterationBuffer.new, removed support for use experimental :collation and made Date.new(year,month,day) about 40% faster.
• And some smaller tweaks and fixes.

### New Raku modules

• Date::YearDay “Date object by year and day-of-year” by Tom Browder.
• RedFactory “A factory for testing code using Red” by Fernando Correa de Oliveira.

### Winding down

A week with sad news. Check in again next week for more upbeat Rakudo news. And it can’t be said enough: stay healthy and stay safe. See you then!

## The Problem…

There I was, just being tidy and getting the latest macOS release (Monterey) for my pricey new M1 laptop – expecting the usual seamless upgrade. Then bang! <<Homebrew is not supported on ARM processors>>. This is NOT (just) a raku thing. In my case, the proximate cause seems to be lack of M1 support for ruby <<No Homebrew ruby 2.6.8 available for arm64 processors!>>. My swag is that Monterey is stepping down the rosetta mollycoddling vs Big Sur, and woe betide anyone who wants to run non-native via a terminal. Well, what did you expect with an architecture change?

CAVEAT

This may well be down to the way I have set things up on my machine – you may never experience any of this. I would also mention that I am not a core dev and I am a bit out on a limb building from source – I can do it fine if it works… and all of what follows involves a good measure of uninstall/reinstall, which can leave a bit of a mess.

## … gets Worse

This made me mad. So I factory reset my machine, installed Xcode (this is safer than xcode-select --install), rosetta2 (leaving the Terminal.app “open with rosetta 2” option unchecked, since I want to go faster) and Docker Desktop and tried several things on a clean macOS machine. My failings included:

• brew install rakudo (and rakudo-star)
• install prebuilt macOS arm64 image
• rakubrew build from source 2021.04 (locale error) & 2021.07 (make test error)

I would love it if any of these methods would work for me … and encourage you to comment if you have fared better / know the incantation that I missed!

And this is not to cast aspersions on the excellent raku toolchain – I am sure if I wait a couple of months and get off the bleeding edge, then all will be fixed. In the meantime, I realised that macOS is a bit of a sideshow for raku which is developed on ubuntu.

## Back to the Drawing Board (aka Docker Desktop)

Since I have been wondering about the best way to test against multiple raku versions and to maintain a tight system configuration around raku to support Inline::Python and Inline::Perl5, I chose to see the silver lining in my cloud and go for Docker. So I installed the newly GA Docker Desktop for Apple Silicon.

This method is very forgiving – Docker Desktop will warn, then run AMD64 (on rosetta) using platform detection if need be, regardless of the Terminal.app settings.

Now to work out the workload … well I have been working on the various Physics::Measure modules using jnthn’s excellent CommaCP for a while, so I was very keen to get testing working for these.

I also have been using JJ’s excellent alpine-raku and raku-alpine-test on the voracious TravisCI service so it seemed natural to go with that. Here are the steps that worked for me:

### Path 1: Basic Module Test

Starting by running a module test from the Docker command line (raku-test is a specialised derivative, with a Dockerfile that starts FROM alpine-raku:latest):

docker run -t -v /path/to/module-dir:/test jjmerelo/raku-test

This will pull the image from the Docker repo and run it… you just specify the local path to your module.

Nice! I am alive once more with local test harness and the ability to apply multiple raku versions – BUT this is on AMD64 (aka Intel) … so all that money spent on M1 is wasted!

### Path 2: Run any raku script

Stepping this up, I can use alpine-raku to run scripts directly (following the documentation):

docker run --rm -t jjmerelo/alpine-raku -e "mkdir 'raku-app'; say 'raku-app'.IO.absolute;"
docker run -t -v $(pwd):/home/raku/raku-app jjmerelo/alpine-raku /home/raku/raku-app/pell.p6

### Path 3: Run Math::Polygons via the existing Dockerfile

Since I had already made a Dockerfile to run this module in interactive mode in a Jupyter notebook, I gave it a try …

git clone https://github.com/p6steve/raku-Math-Polygons
cd raku-Math-Polygons
docker build -t rmp .

This takes a while (20mins+) as it does on Jupyter binder since it is building a full raku ubuntu image from scratch – when running you can access via the browser button on Docker Desktop.

## Putting Docker and Comma together

To wrap this story up, how nice it would be to get Docker and Comma working together … lo and behold, CommaIDE has a plugin just for this…

## And Finally!

So, now all is well – Docker Desktop is up and running and I can ring all the changes from the command line:

docker run -it jjmerelo/alpine-raku
docker run -it jjmerelo/alpine-raku:2021.04
docker run -t jjmerelo/alpine-raku -e "say 'hello þor'"
docker run -it --entrypoint sh -l -c jjmerelo/alpine-raku [container persists eg. zef modules]
docker run -v $(pwd):/app -it jjmerelo/alpine-raku /app/Tree.p6

BUT – this way I have to install all my module dependencies on every test run … surely there must be a better, faster way… (just wait for the next gripping instalment)

~p6steve

## Rakudo Weekly News: 2021.46 Cro Once Again

The Cro Development team proudly announced version 0.8.7 of Cro, the set of Raku libraries for building reactive distributed systems, lovingly crafted to take advantage of all Raku has to offer. Sites such as raku.land and the IRC logs server beta run Cro in production. Check out all the fixes, improvements and new features such as async reverse proxying and improved warnings from rendering templates with undefined values.

### FOSDEM 2022

Andrew Shitov has applied for a Raku DevRoom at FOSDEM 2022, which will be an online only event. Next week we should know whether it was accepted or not!

### Generating a lot of frames

Andinus has written a blog post about Fornax (a collection of tools to visualize Path Finding Algorithms) called “Generating 4.8 million frames“, as part of a Computer Graphics Bachelor project.

### Binary Regexes

Matthew ‘Stephen’ Stuckwisch was triggered by a question on StackOverflow, to revisit their ideas of what would be needed to allow regexes to be applied to binary data. Which resulted in quite a lot of useful links and discussion!

### Grant Update

Daniel Sockwell has written two updates on the progress of the Persistent Data Structures Grant.

### Data::Record’s Identity Crisis

Ben Davies is soliciting opinions about the future of the Data::Record module in: Annotations for the Complete Type.

### That time of the year is coming closer!

Only a few weeks to go, and it’s Advent Calendar time again! Make sure you get a slot in this year’s Advent Calendar for the Raku Programming Language, by adding your blog post proposal to the preliminary list of authors and articles!

### Weeklies

Weekly Challenge #139 is available for your perusal.

### A new release manager, please

The Rakudo Team is still looking for someone willing and able to take on the role of the release manager of the upcoming MoarVM, NQP and Rakudo releases. You don’t have to be an ace programmer, or have intimate knowledge of the code base. The only thing you need is some time, and maybe some hardware to run tests, and be able to follow the Release Guide.

If you are interested in doing this responsible job, please make yourself known on the #raku-dev channel on Libera.chat. Your efforts will be greatly appreciated!

### Core Developments

• Daniel Green optimised memory usage in some bits of string handling, added JIT support for some not-so often used ops, made some ops inlinable and fixed a race condition in initial access to enums.
• Jonathan Worthington simplified return from frame handling.
• Stefan Seifert fixed a JITted return from a nested runloop, and fixed a race condition on initialization of dynamic variables.
• Christian Bartolomäus fixed an issue with how invalid input is handled on the JVM backend.
• Geoffrey Broadwell added support for Terminal::LineEditor in the REPL.
• Daniel Sockwell improved the consistency of handling attempts to (re-)bind to read-only variables.
• Vadim Belman added the possibility to the .^mro method to request parametric or concrete roles to be included in the result.
• Timo Paulssen optimized smartmatching on Int, thus speeding up smartmatching on enums with integer values.
• Patrick Böker added checksums for release files.
• And some other smaller tweaks and fixes.

### New Raku modules

• has-word “A quick non-regex word-boundary checker” by Elizabeth Mattijsen.

### Winding down

A little bit of a quiet week, it feels, with only a single new module. But still nice to see that at least 13 people have released updated modules, one of them the new version of Cro! And again a thought provoking discussion this week. Check again next week for more Rakudo news. And it can’t be said enough: stay healthy and stay safe. See you then!

## Rakudo Weekly News: 2021.45 Two Commas

With only being a few hours late to make it to last week’s Rakudo Weekly News, Oleksandr Kyriukhin was nonetheless glad to be able to announce another release of the Comma IDE for subscribers, as well as a new free Comma Community Edition! Check out the changes! And if you don’t know about Comma, check out the FAQ!

### FOSDEM 2022

As was announced a few weeks ago, FOSDEM 2022 will be an online only event. This means that the dev-rooms will also be online. A call for participation has been made (/r/rakulang comments).

### Fedora 35

Claudio Ramirez informs us that all Rakudo packages now also support the just released Fedora 35.

### Wenzel’s Corner

Wenzel produced three blog posts this week:

### Alexey’s Corner

Alexey Melezhik is looking for feedback on the future of #mybfio.

### That time of the year is coming closer!

Only a few weeks to go, and it’s Advent Calendar time again! Make sure you get a slot in this year’s Advent Calendar for the Raku Programming Language, by adding your blog post proposal to the preliminary list of authors and articles!

### On Junctions and Smartmatching

A few months ago Ralph Mellor opened a problem-solving issue about the left side of ~~ with regards to Junctions. This has generated a lot of discussion on that and related issues in the past week. So this may be a good time for you to read and give your comments!

### Weeklies

Weekly Challenge #138 is available for your perusal.

### Core Developments

• Zhuomingliang provided a better fix for a GC issue.
• Jonathan Worthington optimized frame allocation, fixed a thinko in the resumption logic and a thread safety issue with intern lookups.
• Stefan Seifert made sure that boolification/intification pairs are eliminated in spesh already, and fixed builds done with --valgrind, a race condition with JIT compiled dispatches, and removed an unnecessary NULL check on a hot code path.
• Nicholas Clark fixed several libffi / NativeCall issues.
• Timo Paulssen optimized a common case of replacing one argument by another in dispatch.
• Elizabeth Mattijsen introduced the RAKUDO_PRECOMPILATION_PROGRESS environment variable, which shows which modules are being (re-)precompiled.
• Ben Davies fixed handling of constrained Mu parameters in signature smartmatching.
• And some other smaller fixes and tweaks.

### New Raku modules

• fornax “Collection of tools to visualize Path Finding Algorithms” by Andinus.
• Rakudo::CORE::META “Provide zef compatible meta-data for Rakudo” by Elizabeth Mattijsen.
• Array::Sorted::Map “Provide a Map interface for 2 sorted lists” by Elizabeth Mattijsen.
• Array::Unsorted::Map “Provide a Map interface for 2 unsorted lists” by Elizabeth Mattijsen.

### Winding down

Cool fixes and improvements deep under the hood. Quite a few new modules and updated modules. Some thought provoking discussions. Not a bad week at all! Pretty sure next week will come up with more Rakudo news. See you then!

## gfldex: 2nd class join

For challenge #137.1 we are looking for long years. We can implement the algorithm as described in Wikipedia (and ignore that Dateish got .week-number to have a reason for showing off with junctions).

multi sub infix:«|,»(\left, \right) is equiv(&infix:<Z>) { |left, |right }

say (1900..2100).grep({ Date.new(.Int, 1, 1).day-of-week | Date.new(.Int, 12, 31).day-of-week == 4 });

# OUTPUT: 1903 1908 1914 1920 1925 1931 1936 1942 1948 1953 1959 1964 1970 1976 1981 1987 1992 1998 2004 2009 2015 2020 2026 2032 2037 2043 2048 2054 2060 2065 2071 2076 2082 2088 2093 2099

The output doesn’t look too nice. It would be better to group the years in columns.

put ( (1900..2100).grep({ Date.new(.Int, 1, 1).day-of-week | Date.new(.Int, 12, 31).day-of-week == 4 }) Z (' ' xx 7 |, $?NL) |xx ∞ ).flat.head(*-1).join('');

# OUTPUT: 1903 1908 1914 1920 1925 1931 1936 1942 1948 1953 1959 1964 1970 1976 1981 1987 1992 1998 2004 2009 2015 2020 2026 2032 2037 2043 2048 2054 2060 2065 2071 2076 2082 2088 2093 2099

That is pretty long and convoluted. The reason why I need Z with an infinite list and have to remove the last redundant element (either a newline or a space) is that .join isn’t all that smart. Let’s build a smarter function.

multi sub smart-join(Seq:D \separator, *@l --> Str:D) {
    my $ret;
    my $sep-it  = separator.iterator;
    my $list-it = @l.iterator;

    loop {
        my $e := $list-it.pull-one;
        last if $e =:= IterationEnd;
        $ret ~= $e;
        $ret ~= $sep-it.pull-one if $list-it.bool-only;
    }

    $ret
}

multi sub infix:«|xx»(Mu \left, Mu \right) is equiv(&infix:<xx>) { (left xx right).flat }

1900..2100
==> grep({ Date.new(.Int, 1, 1).day-of-week | Date.new(.Int, 12, 31).day-of-week == 4 })
==> smart-join( (' ' xx 7 |, $?NL) |xx ∞ )
==> say();

Now we can use the (sadly under-used) feed operator. The lazy list that is generating the alternating separators might be a bit slow. If we go functional, the code for both smart-join and the alternator gets simpler.

multi sub smart-join(&separators, *@l --> Str:D) {
    my $ret;
    while @l {
        $ret ~= @l.shift;
        $ret ~= separators if +@l.elems;
    }
    $ret
}

1900..2100
==> grep({ Date.new(.Int, 1, 1).day-of-week | Date.new(.Int, 12, 31).day-of-week == 4 })
==> smart-join({ ++$ %% 8 ?? $?NL !! ' ' })
==> say();

Right now there is no easy way to add this as a module because sub join is not a multi. Since we use Z with such infinite lists quite often, I believe a proper functional way in CORE would not hurt. Raku supports functional programming in many places, but some subs are still 2nd class citizens. It may be reasonable to make a list so I can hand it over to Santa.

### UPDATE:

As lizmat noted, sub join is a multi already. So there is room for a module.
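As an aside, the .week-number method the post jokingly ignores gives a one-liner for the long-year check itself: under the ISO 8601 rules, December 28 always falls in the last week of its year, so a year is long exactly when that date reports week 53. A sketch, not from the original post:

```raku
# Long years are those whose December 28 lies in ISO week 53.
say (1900..2100).grep({ Date.new($_, 12, 28).week-number == 53 });
# produces the same list of long years as the grep above
```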

## gfldex: Should it mutate or not? YES!

On Discord Hydrazer was looking for a list concatenation operator. That leaves the question of whether it should mutate like Array.push or return a new list of Slips.

sub infix:«|<<»(\a, \e) {
    Proxy.new(FETCH => method { |a, |e },
              STORE => method (\e) {})
    but role { method sink { a.push: e } };
}

my @a = 1,2,3;
@a |<< 4;
dd @a;
my @b = @a |<< 5;
dd @a, @b;

In sink-context returning a new list would not make sense. With a Proxy that provides a sink-method we can answer the question with “YES!”.

This made me wonder if Proxy should have an optional SINK argument. Right now there is no way to define containers in pure Raku. Even with subclassing we would have to descend into nqp-land. As new-disp has shown, modules that depend on use nqp; tend to be quite brittle.

## gfldex: TIMTOWTDItime

On Discord flirora wished for a way to merge list elements conditionally. In this instance the condition is that any element that starts with a space is part of a group.

{
my @a = ("apple", " banana", " peach", "blueberry", "pear", " plum", "kiwi");

multi sub merge-spacy([]) { () }
multi sub merge-spacy([$x is copy, *@xs]) {
    if @xs[0].?starts-with(' ') {
        $x ~= @xs.shift;
        merge-spacy([|$x, |@xs])
    }
    else {
        $x, |merge-spacy(@xs)
    }
}

dd merge-spacy(@a);
}
# OUTPUT: ("apple banana peach", "blueberry", "pear plum", "kiwi")

This functional version is neat but slow. Rakudo can’t inline recursion and doesn’t do any other optimisations yet.

my @a = ("apple", " banana", " peach", "blueberry", "pear", " plum", "kiwi");

sub merge-with(@a, &c) {
    gather while @a.shift -> $e {
        if @a && &c(@a.head) {
            @a.unshift($e ~ @a.shift)
        }
        else {
            take $e;
        }
    }
}

dd @a.&merge-with(*.starts-with(' '));
# OUTPUT: ("apple banana peach", "blueberry", "pear plum", "kiwi").Seq

With gather/take we don’t have to worry about recursion and the returned Seq is lazy by default. This can provide a big win if the list gets big and is not wholly consumed.

my @a = ("apple", " banana", " peach", "blueberry", "pear", " plum", "kiwi");

multi sub join(*@a, :&if!) {
    class :: does Iterable {
        method iterator {
            class :: does Iterator {
                has @.a;
                has &.if;
                method pull-one {
                    return IterationEnd unless @!a;
                    my $e = @!a.shift;
                    return $e unless @!a;
                    while &.if.(@!a.head) {
                        $e ~= @!a.shift;
                    }
                    return $e;
                }
            }.new(a => @a, if => &if)
        }
    }.new
}

.say for join(@a, if => *.starts-with(' '));

This version should please lizmat as it uses iterators. The conditional is also factored out and CORE will use the Iterator lazily wherever possible. In production code I would get rid of the return statements and replace them with ternary operators to get a little extra performance.

The original question (that clearly got me carried away) asked for the groups to be joined. Once we have lost a structure, it can be difficult to reconstruct it.

my @a = ("apple", " banana", " peach", "blueberry", "pear", " plum", "kiwi");

#| &c decides if the group is finished
sub group-list(@a, &c) {
    my @group;
    gather while @a {
        my $e = @a.shift;
        my $next := +@a.elems ?? @a.head !! Nil;
        @group.push($e);
        if !c($e, $next) {
            take @group.clone;
            @group = ();
        }
    }
}

dd @a.&group-list(-> $left, $right { $right && $right.starts-with(' ') });

Here the conditional gets two elements to decide if they belong to the same group. It is also the first time I used .clone.

Thanks to a simple question I learned quite a bit. It forced me to think about the disadvantages of my first idea. Maybe code challenges should explicitly ask for more than one answer to the same question.

## Rakudo Weekly News: 2021.44 1000+ Rakoons

This week the Raku Community on Reddit makes it to the main article of the Rakudo Weekly News. In the roughly two years since the rename, the number of subscribers has made it to 1000! That does not quite reach the number of subscribers of the previous reddit just yet, but on the other hand that had been in use for almost 9 years! And to all new Rakoons: welcome to the Raku Programming Language.

### Grants

Jonathan Worthington reports on the completion of the grant for the new Raku dispatch mechanism (/r/rakulang comments) and submitted a grant proposal for building optimizations upon the new Raku dispatch mechanism. Check them out and leave any comments you may have!

### That time of the year is coming!

Only a few weeks to go, and it’s Advent Calendar time again! Make sure you get a slot in this year’s Advent Calendar for the Raku Programming Language, by adding your blog post proposal to the preliminary list of authors and articles!

### Alexey’s Corner

Alexey Melezhik had two announcements about the My Butterflies – Friendly Software Reviews Network: Sticky releases and a weekly Raku update feature.

### Weeklies

Weekly Challenge #137 is available for your perusal.

### Core Developments

• Daniel Green worked on JITting more ops, fixed an issue with the primality of negative numbers and fixed Buf.gist when parameterized with an unsized type.
• Stefan Seifert fixed an issue when a frame had more than 8192 locals and fixed unnecessary boxing of native return types.
• Vadim Belman improved the error message when trying to assign to a Nil value, and made sure you can not call .new on enums.
• Nick Logan fixed an issue that was causing unnecessary pre-compilation of modules.
• Elizabeth Mattijsen improved the performance of Str.subst, Str.match, Str.subst-mutate and Str.trans on the MoarVM backend, and made sure the return value of Str.match is threadsafe.
• Peter du Marchie van Voorthuysen removed a redundant .list method.
• And some smaller tweaks and fixes.

### New Raku modules

• Geo::Location “Provides location data for astronomical and other programs” by Tom Browder.
• GtkLayerShell “Interface with the Gtk Layer Shell” by Siavash Askari Nasr.
• Sway::Config “Parsing Sway window manager’s config” by Siavash Askari Nasr.
• Sway::PreviewKeys “Show preview windows for Sway modes’ key bindings” by Siavash Askari Nasr.
• Data::Generators “Generating random strings, words, pet names, vectors, and (tabular) datasets” by Anton Antonov.
• Ecosystem::Archive “Interface to the Raku Ecosystem Archive” by Elizabeth Mattijsen.
• Terminal::ANSIParser “ANSI/VT stream parser” by Geoffrey Broadwell.

### Winding down

Wow, so much new stuff to look at and/or try out! And a new record for the test-t benchmark! Another good week. Please don’t forget to think about the Raku Advent posts coming up. Less than a month to go now until the first one! See you next week for more Rakudo news.

## Rakudo Weekly News: 2021.43 Thank You

Oleksandr Kyriukhin has released the 2021.10 version of the Rakudo Compiler, which includes all of the work on the new MoarVM dispatch mechanism. This is the culmination of more than 1.5 years of work by many people, but mostly by Jonathan Worthington. A historic step forward that lays the groundwork for more efficient execution of Raku programs, and actually delivers on a number of improvements.

Claudio Ramirez quickly provided Linux packages for this release.

This release is also historic for another reason: after having done 24 Rakudo compiler releases, Oleksandr Kyriukhin has decided it is time for someone else to take over this important task. Kudos to Oleksandr for having done so many releases!

### Videos

This week, a number of videos from recent and not so recent conferences became available:

### Do you like Red?

Then this is the time you can answer Fernando Correa de Oliveira‘s appeal to create the first stable release of Red, an ORM for the Raku Programming Language. Check out the issues of what still needs to be done. And the whole Raku community will be thankful!

### That time of the year is coming!

Only a few weeks to go, and it’s Advent Calendar time again! Make sure you get a slot in this year’s Advent Calendar for the Raku Programming Language, by adding your blog post (proposal) to the preliminary list of authors and articles!

### Mikhail’s Corner

Mikhail Khorkov tells us about their Raku Code Coverage module called App::RaCoCo in Measuring Code Coverage by Testing in Raku (original Russian version).

### Alexey’s Corner

Alexey Melezhik has added a language selection option to My Butterflies (for independent software reviews), for instance software for Raku only.

### Weeklies

Weekly Challenge #136 is available for your perusal.

### Core Developments

• Jonathan Worthington improved inlining of code that can never de-optimize.
• Daniel Green fixed several C-compiler warnings and some memory leaks and added JITting of the nqp::abs_i op.
• Stefan Seifert fixed an issue with dispatch guards on a de-optimization, the compilation of dispatch on routines that use non-standard native result types and removed unnecessary boxing of routines with native return types.
• Christian Bartolomäus fixed various JVM specific tests in nqp.
• Vadim Belman added an is-wrapped method to Routine objects.
• And some smaller tweaks and fixes.

### New Raku modules

• Terminal::LineEditor “Generalized terminal line editing” by Geoffrey Broadwell.
• AWS::SNS::Notification “Description of an AWS Simple Notification Service message” by Jonathan Stowe.
• DateTime::US “Time zone and DST information for US states and territories” by Tom Browder.
• PatternMatching “Library for pattern matching” by Siavash Askari Nasr.

### Winding down

A historic week. Some nice videos to watch. And a 2021.10 Rakudo Compiler release that made the new-disp work go mainstream, to try out! A good week. Check in again next week for more news about the Raku Programming Language.

## gfldex: Double inspiration

Quite a few of the posts prior to this one were inspired by a post of a fellow blogger. I would like to double down on that today. Vadim wrangled with symbols and Fabio enjoyed renaming them. Having struggled with packages in the past, Vadim’s post was very helpful in making me realise that .HOW is how I can get hold of the object that is the package. And if Perl can do it, there is surely no way to stop Raku from having the same capability.

We want to re-export functions while changing their name. Just adding a prefix will do for now. That presents the first problem: currently, there is no way to get named arguments handed from a use statement to sub EXPORT. Any Hash will also be gobbled up. All we have are positional parameters. Since Raku is omni-paradigmatic, that won’t pose a challenge.

use Transport {'WWW', :prefix<www->};

We can execute that block and use destructuring to get hold of the positional and any colonpair.

use v6.d;

sub EXPORT(&args) {
my ($module-name, *%tags) = args;
my \module = (require ::($module-name));
my %exports = module.WHO<EXPORT>.WHO<DEFAULT>.WHO.&{ .keys Z=> .values };
my %prefixed-exports = %exports.map: { .key.substr(0, 1) ~ %tags<prefix> ~ .key.substr(1..*) => .value };

%prefixed-exports.Map
}

The only trouble I had was with .WHO being a macro and not a method of Mu. So we need a symbol to hold the package, which is returned by require.

dd &www-jpost;
# OUTPUT: Sub jpost = sub jpost (|c) { #(Sub|94403524172704) ... }

I didn’t turn this into a proper module yet. This needs more reading (what Sub::Import is actually being used for) and thinking. A 1:1 translation from Perl seems to be the easy way and thus is likely not the most correct.

## vrurg: Merging Symbols Issue

First of all, I’d like to apologize for all the errors in this post. I just haven’t got time to properly proof-read it.

A while ago I was trying to fix a problem in Rakudo which, under certain conditions, causes some external symbols to become invisible to importing code, even if an explicit use statement is used. And, indeed, it is really confusing when:

use L1::L2::L3::Class;
L1::L2::L3::Class.new;


fails with a “Class symbol doesn’t exists in L1::L2::L3” error! It’s OK for use to throw when there is no corresponding module. But .new??

## Skip This Unless You Know What A Package Is

This section is needed to understand the rest of the post. A package in Raku is a typeobject which has a symbol table attached. The table is called a stash (stands for “symbol table hash”) and is represented by an instance of the Stash class, which is, basically, a hash with minor tweaks. Normally each package instance has its own stash. For example, it is possible to manually create two different packages with the same name:

my $p1a := Metamodel::PackageHOW.new_type(:name<P1>);
my $p1b := Metamodel::PackageHOW.new_type(:name<P1>);
say $p1a.WHICH, " ", $p1a.WHO.WHICH; # P1|U140722834897656 Stash|140723638807008
say $p1b.WHICH, " ", $p1b.WHO.WHICH; # P1|U140722834897800 Stash|140723638818544


Note that they have different stashes as well.

A package as such is rarely used in Raku. Usually we deal with packagy things like modules and classes.

## Back On The Track

Back then I managed to trace the problem down to deserialization process within MoarVM backend. At that point I realized that somehow it pulls in packagy objects which are supposed to be the same thing, but they happen to be different and have different stashes. Because MoarVM doesn’t (and must not) have any idea about the structure of high-level Raku objects, there is no way it could properly handle this situation. Instead it considers one of the conflicting stashes as “the winner” and drops the other one. Apparently, symbols unique to the “loser” are lost then.

It took me time to find out what exactly happens. But not until a couple of days ago did I realize what the root cause is and how to get around the bug.

## Package Tree

What happens when we do something like:

module Foo {
module Bar {
}
}


How do we access Bar, speaking of the technical side of things? Foo::Bar syntax basically maps into Foo.WHO<Bar>. In other words, Bar gets installed as a symbol into Foo stash. We can also rewrite it with special syntax sugar: Foo::<Bar> because Foo:: is a representation for Foo stash.

So far, so good; but where do we find Foo itself? In Raku there is a special symbol called GLOBAL which is the root namespace (or a package if you wish) of any code. GLOBAL::, or GLOBAL.WHO is where one finds all the top-level symbols.
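These equivalences can be checked directly (a quick sketch of my own, not from the original post):

```raku
module Foo {
    module Bar { }
}

say Foo::Bar === Foo.WHO<Bar>;   # Foo::Bar is a lookup in Foo’s stash
say Foo::<Bar> === Foo::Bar;     # Foo:: is sugar for Foo.WHO
say GLOBAL::<Foo> === Foo;       # top-level symbols live in GLOBAL’s stash
```

All three lines should print True, since each spelling resolves through the very same stash entry.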

Say, we have a few packages like L11::L21, L11::L22, L12::L21, L12::L22. Then the namespace structure would be represented by this tree:

GLOBAL
- L11
- L21
- L22
- L12
- L21
- L22


Normally there is one per-process GLOBAL symbol and it belongs to the compunit which was used to start the program. Normally that’s a .raku file, a string supplied on the command line with the -e option, etc. But each compunit also gets its own GLOBALish package which acts as the compunit’s GLOBAL until it is fully incorporated into the main code. Say, we declare a module in file Foo.rakumod:

unit module Foo;
sub print-GLOBAL($when) is export {
say "$when: ", GLOBAL.WHICH, " ", GLOBALish.WHICH;
}


And use it in a script:

use Foo;
print-GLOBAL 'RUN ';


Then we can get an output like this:

LOAD: GLOBAL|U140694020262024 GLOBAL|U140694020262024
RUN : GLOBAL|U140694284972696 GLOBAL|U140694020262024


Notice that the GLOBALish symbol remains the same object, whereas GLOBAL becomes a different one. If we add a line to the script which also prints GLOBAL.WHICH, then we’re going to get something like:

MAIN: GLOBAL|U140694284972696


Let’s set this part of the story aside for a while and move on to another subject.

## Compunit Compilation

This is going to be a shorter story. It is not a secret that, however powerful Raku’s grammars are, they need some core developer attention to make them really fast. In the meanwhile, compilation speed is somewhat suboptimal. It means that if a project consists of many compunits (think of modules, for example), it would make sense to try to compile them in parallel if possible. Unfortunately, the compiler is not thread-safe either. To resolve this complication, the Rakudo implementation parallelizes compilation by spawning an individual process per compunit.

For example, let’s refer back to the module tree example above and imagine that all modules are used by a script. In this case there is a chance that we would end up with six rakudo processes, each compiling its own L* module. Apparently, things get slightly more complicated if there are cross-module uses, like L11::L21 could refer to L21, which, in turn, refers to L11::L22, or whatever. In this case we need to use topological sort to determine in what order the modules are to be compiled; but that’s not the point.

The point is that since each process does independent compilation, each compunit needs independent GLOBAL to manage its symbols. For the time being, what we later know as GLOBALish serves this duty for the compiler.

Later, when all pre-compiled modules are getting incorporated into the code which uses them, symbols installed into each individual GLOBAL are getting merged together to form the final namespace, available for our program. There are even methods in the source, using merge_global in their names.

## TA-TA-TAAA!

(Note the clickable section header; I love the guy!)

Now, you can feel the catch. Somebody might even have guessed what it is. It crossed my mind after I was trying to implement legal symbol auto-registration which doesn’t involve using QAST to install a phaser. At some point I got an idea of using GLOBAL to hold a register object which would keep track of specially flagged roles. Apparently it failed due to the parallelized compilation mentioned above. It doesn’t matter why; but at that point I started building a mental model of what happens when the merge takes place. And one detail drew my special attention: what happens if a package in a long name is not explicitly declared?

Say, there is a class named Foo::Bar::Baz one creates as:

unit class Foo::Bar;
class Baz { }


In this case the compiler creates a stub package for Foo. The stub is used to install class Bar. Then it all gets serialized into bytecode.
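The stub is visible from using code too. A quick sketch of my own (not from the original post; the exact metaclass names may vary between Rakudo versions):

```raku
# Assuming lib/Foo/Bar.rakumod contains:
#     unit class Foo::Bar;
#     class Baz { }

use Foo::Bar;

say Foo::Bar.HOW.^name;  # a real class metaobject (ClassHOW)
say Foo.HOW.^name;       # the compiler-generated stub (a plain PackageHOW)
```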

At the same time there is another module with another class:

unit class Foo::Bar::Fubar;


It is not aware of Foo::Bar::Baz, and the compiler has to create two stubs: Foo and Foo::Bar. And not only are the two versions of Foo different, with different stashes; so are the two versions of Bar, where one is a real class and the other is a stub package.

Most of the time the compiler does a damn good job of merging symbols in such cases. It took me stripping down real-life code to golf it to a minimal set of modules which reproduces the situation where a require call comes back with a Failure and a symbol goes missing. The remaining part of this post is dedicated to this example. In particular, this whole text is dedicated to one line.

Before we proceed further, I’d like to state that I might be speculating about some aspects of the problem’s cause, because some details are gone from my memory and I don’t have time to re-investigate them. Still, so far my theory is backed by the working workaround presented at the end.

To make it a bit easier to analyze the case, let’s start with namespace tree:

GLOBAL
- L1
- App
- L2
- Collection
- Driver
- FS


The rough purpose: the application deals with some kind of collection which stores its items with the help of a driver that is loaded dynamically, depending, say, on user configuration. We have only one driver implemented: File System (FS).

If you checkout the repository and try raku -Ilib symbol-merge.raku in the examples/2021-10-05-merge-symbols directory, you will see some output ending up with a line like Failure|140208738884744 (certainly true for up until Rakudo v2021.09 and likely to be so for at least a couple of versions later).

The key conflict in this example is between the modules Collection and Driver. The full name of Collection is L1::L2::Collection. L1 and L2 are both stubs. Driver is L1::L2::Collection::Driver and, because it imports L1::L2, L2 is a class; but L1 remains a stub. By commenting out the import we’d get the bug resolved, and the script would end up with something like:

L1::L2::Collection::FS|U140455893341088


This means that the driver module was successfully loaded and the driver class symbol is available.

Ok, uncomment the import and start the script again. And then once again to get rid of the output produced by compilation-time processes. We should see something like this:

[7329] L1 in L1::L2         : L1|U140360937889112
[7329] L1 in Driver         : L1|U140361742786216
[7329] L1 in Collection     : L1|U140361742786480
[7329] L1 in App            : L1|U140361742786720
[7329] L1 in MAIN           : L1|U140361742786720
[7329] L1 in FS             : L1|U140361742788136
Failure|140360664014848


We already know that L1 is a stub. Dumping object IDs also reveals that each compunit has its own copy of L1, except for App and the script (marked as MAIN). This is pretty much expected because each L1 symbol is installed at compile-time into per-compunit GLOBALish. This is where each module finds it. App is different because it is directly imported by the script and was compiled by the same compiler process, and shared its GLOBAL with the script.

Now comes the black magic. Open lib/L1/L2/Collection/FS.rakumod and uncomment the last line in the file. Then give it a try. The output would seem impossible at first; hell with it, even at second glance it is still impossible:

[17579] Runtime Collection syms      : (Driver)


Remember, this line belongs to L1::L2::Collection::FS! How come we don’t see FS in Collection stash?? No wonder that when the package cannot see itself others cannot see it too!

Here comes a bit of my speculation based on what I vaguely remember from the times ~2 years ago when I was trying to resolve this bug for the first time.

When Driver imports L1::L2, Collection gets installed into L2 stash, and Driver is recorded in Collection stash. Then it all gets serialized with Driver compunit.

Now, when FS imports Driver to consume the role, it gets the stash of L2 serialized at the previous stage. But its own L2 is a stub under L1 stub. So, it gets replaced with the serialized “real thing” which doesn’t have FS under Collection! Bingo and oops…

## A Workaround

Walk through all the example files and uncomment the use L1 statement. That’s it. All compunits will now have a common anchor to which their namespaces will be attached.

The common rule would be: if a problem of this kind occurs, make sure there are no stub packages in the chain from GLOBAL down to the “missing” symbol. In particular, commenting out use L1::L2 in Driver will bring our error back, because it would create a “hole” between L1 and Collection and get us back into the situation where conflicting Collection namespaces are created because they’re bound to different L2 packages.

It doesn’t really matter how exactly the stubs are avoided. For example, we can easily move use L1::L2 into Collection and make sure that use L1 is still part of L2. So, for simplicity a child package may import its parent; and parent may then import its parent; and so on.
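In terms of the example tree, the workaround boils down to making every intermediate package real and anchored. A sketch of what the module files would look like (my own illustration, assuming the usual file-per-module layout):

```raku
# lib/L1.rakumod
unit module L1;

# lib/L1/L2.rakumod
unit module L1::L2;
use L1;          # anchor to the real L1 – no stub gets created for it

# lib/L1/L2/Collection.rakumod
unit module L1::L2::Collection;
use L1::L2;      # likewise, L2 is now always the real package
```

With every link in the chain importing its parent, each compunit deserializes the same real packages instead of growing its own stubs.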

Sure, this adds to the boilerplate. But I hope the situation is temporary and there will be a fix.

## Fix?

The one I was playing with required a compunit to serialize its own GLOBALish stash at the end of the compilation in a location where it would not be at risk of being overwritten. Basically, it means cloning and storing it locally on the compunit (the package stash is part of the low-level VM structures). Then compunit mainline code would invoke a method on the Stash class which would forcibly merge the recorded symbols back right after deserialization of the compunit’s bytecode. It was seemingly working, but looked more like a hack than a real fix. This and a few smaller issues (like a segfault which I failed to track down) caused it to be frozen.

As I was thinking about it lately, a more proper fix must be based upon a common GLOBAL shared by all compunits of a process. In this case there will be no worry about multiple stubs generated for the same package, because each stub will be shared by all compunits until, perhaps, the real package is found in one of them.

Unfortunately, the complexity of implementing the ‘single GLOBAL’ approach is such that I’m unsure if anybody with appropriate skill could fit it into their schedule.

## 6guts: The new MoarVM dispatch mechanism is here!

Around 18 months ago, I set about working on the largest set of architectural changes that the Raku runtime MoarVM has seen since its inception. The work was most directly triggered by the realization that we had no good way to fix a certain semantic bug in dispatch without either causing huge performance impacts across the board or increasing complexity even further in optimizations that were already riding their luck. However, the need for something like this had been apparent for a while: a persistent struggle to optimize certain Raku language features, the pain of a bunch of performance mechanisms that were all solving the same kind of problem but each for a specific situation, and a sense that, with everything learned since I founded MoarVM, it was possible to do better.

The result is the development of a new generalized dispatch mechanism. An overview can be found in my Raku Conference talk about it (slides, video); in short, it gives us a far more uniform architecture for all kinds of dispatch, allowing us to deliver better performance on a range of language features that have thus far been glacial, as well as opening up opportunities for new optimizations.

Today, this work has been merged, along with the matching changes in NQP (the Raku subset we use for bootstrapping and to implement the compiler) and Rakudo (the full Raku compiler and standard library implementation). This means that it will ship in the October 2021 releases.

In this post, I’ll give an overview of what you can expect to observe right away, and what you might expect in the future as we continue to build upon the possibilities that the new dispatch architecture has to offer.

### The big wins

The biggest improvements involve language features that we’d really not had the architecture to do better on before. They involved dispatch – that is, getting a call linked to a destination efficiently – but the runtime didn’t provide us with a way to “explain” to it that it was looking at a dispatch, let alone with the information needed to have a shot at optimizing it.

The following graph captures a number of these cases, and shows the level of improvement, ranging from a factor of 3.3 to 13.3 times faster.

Let’s take a quick look at each of these. The first, new-buf, asks how quickly we can allocate Bufs.

for ^10_000_000 {
Buf.new
}
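As an aside, that Buf really is a role rather than a class can be checked directly from user code (a quick sketch of my own, not part of the original post):

```raku
say Buf.HOW.^name;          # a role metaobject, not a ClassHOW
my $b = Buf.new(1, 2, 3);   # instantiating the role punned it into a class
say $b.^name;               # the pun still presents itself as Buf
```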


Why is this a dispatch benchmark? Because Buf is not a class, but rather a role. When we try to make an instance of a role, it is “punned” into a class. Up until now, it worked as follows:

1. We look up the new method
2. The find_method method would, if needed, create a pun of the role and cache it
3. It would return a forwarding closure that takes the arguments and gives them to the same method called on the punned class, or spelt in Raku code, -> $role-discarded, |args { $pun."$name"(|args) }
4. This closure would be invoked with the arguments

This had a number of undesirable consequences:

1. While the pun was cached, we still had a bit of overhead to check if we’d made it already
2. The arguments got slurped and flattened, which costs something, and…
3. …the loss of callsite shape meant we couldn’t look up a type specialization of the method, and thus lost a chance to inline it too

With the new dispatch mechanism, we have a means to cache constants at a given program location and to replace arguments. So the first time we encounter the call, we:

1. Get the role pun produced if needed
2. Resolve the new method on the class punned from the role
3. Produce a dispatch program that caches this resolved method and also replaces the role argument with the pun

For the next thousands of calls, we interpret this dispatch program. It’s still some cost, but the method we’re calling is already resolved, and the argument list rewriting is fairly cheap. Meanwhile, after we get into some hundreds of iterations, on a background thread, the optimizer gets to work. The argument re-ordering cost goes away completely at this point, and new is so small it gets inlined – at which point the buffer allocation is determined dead and so goes away too. Some remaining missed opportunities mean we still are left with a loop that’s not quite empty: it busies itself making sure it’s really OK to do nothing, rather than just doing nothing.

Next up, multiple dispatch with where clauses.

multi fac($n where $n <= 1) { 1 }
multi fac($n) { $n * fac($n - 1) }
for ^1_000_000 {
fac(5)
}


These were really slow before, since:

1. We couldn’t apply the multi-dispatch caching mechanism at all as soon as we had a where clause involved
2. We would run where clauses twice in the event the candidate was chosen: once to see if we should choose that multi candidate, and once again when we entered it

With the new mechanism, we:

1. On the first call, calculate a multiple dispatch plan: a linked list of candidates to work through
2. Invoke the one with the where clause, in a mode whereby if the signature fails to bind, it triggers a dispatch resumption. (If it does bind, it runs to completion)
3. In the event of a bind failure, the dispatch resumption triggers, and we attempt the next candidate

Once again, after the setup phase, we interpret the dispatch programs. In fact, that’s as far as we get with running this faster for now, because the specializer doesn’t yet know how to translate and further optimize this kind of dispatch program. (That’s how I know it currently stands no chance of turning this whole thing into another empty loop!) So there’s more to be had here also; in the meantime, I’m afraid you’ll just have to settle for a factor of ten speedup.

Here’s the next one:

proto with-proto(Int $n) { 2 * {*} }
multi with-proto(Int $n) { $n + 1 }
sub invoking-nontrivial-proto() {
for ^10_000_000 {
with-proto(20)
}
}

Again, on top form, we’d turn this into an empty loop too, but we don’t quite get there yet. This case wasn’t so terrible before: we did get to use the multiple dispatch cache, however to do that we also ended up having to allocate an argument capture. The need for this also blocked any chance of inlining the proto into the caller. Now that is possible. Since we cannot yet translate dispatch programs that resume an in-progress dispatch, we don’t yet get to further inline the called multi candidate into the proto. However, we now have a design that will let us implement that.

This whole notion of a dispatch resumption – where we start doing a dispatch, and later need to access arguments or other pre-calculated data in order to do a next step of it – has turned out to be a great unification. The initial idea for it came from considering things like callsame:

class Parent {
method m() { 1 }
}
class Child is Parent {
method m() { 1 + callsame }
}
for ^10_000_000 {
Child.m;
}

Once I started looking at this, and then considering that a complex proto also wants to continue with a dispatch at the {*}, and in the case a where clause fails in a multi it also wants to continue with a dispatch, I realized this was going to be useful for quite a lot of things. It will be a bit of a headache to teach the optimizer and JIT to do nice things with resumes – but a great relief that doing that once will benefit multiple language features!

Anyway, back to the benchmark. This is another “if we were smart, it’d be an empty loop” one. Previously, callsame was very costly, because each time we invoked it, it would have to calculate what kind of dispatch we were resuming and the set of methods to call. We also had to be able to locate the arguments.
Dynamic variables were involved, which cost a bit to look up too, and – despite being an implementation detail – these also leaked out in introspection, which wasn’t ideal. The new dispatch mechanism makes this all rather more efficient: we can cache the calculated set of methods (or wrappers and multi candidates, depending on the context) and then walk through it, and there are no dynamic variables involved (and thus no leakage of them). This sees the biggest speedup of the lot – and since we cannot yet inline away the callsame, it’s (for now) measuring the speedup one might expect on using this language feature. In the future, it’s destined to optimize away to an empty loop.

A module that makes use of callsame on a relatively hot path is OO::Monitors, so I figured it would be interesting to see if there is a speedup there also.

use OO::Monitors;

monitor TestMonitor {
method m() { 1 }
}

my $mon = TestMonitor.new;
for ^1_000_000 {
$mon.m();
}

monitor is a class that acquires a lock around each method call. The module provides a custom meta-class that adds a lock attribute to the class and then wraps each method such that it acquires the lock. There are certainly costly things in there besides the involvement of callsame, but the improvement to callsame is already enough to see a 3.3x speedup in this benchmark. Since OO::Monitors is used in quite a few applications and modules (for example, Cro uses it), this is welcome (and yes, a larger improvement will be possible here too).

### Caller side decontainerization

I’ve seen some less impressive, but still welcome, improvements across a good number of other microbenchmarks. Even a basic multi dispatch on the + op:

my $i = 0;
for ^10_000_000 {
$i = $i + $_;
}

Comes out with a factor of 1.6x speedup, thanks primarily to us producing far tighter code with fewer guards. Previously, we ended up with duplicate guards in this seemingly straightforward case. The infix:<+> multi candidate would be specialized for the case of its first argument being an Int in a Scalar container and its second argument being an immutable Int. Since a Scalar is mutable, the specialization would need to read it and then guard the value read before proceeding, otherwise it may change, and we’d risk memory safety. When we wanted to inline this candidate, we’d also want to do a check that the candidate really applies, and so also would dereference the Scalar and guard its content to do that. We can and do eliminate duplicate guards – but these guards are on two distinct reads of the value, so that wouldn’t help.

Since in the new dispatch mechanism we can rewrite arguments, we can now quite easily do caller-side removal of Scalar containers around values. So easily, in fact, that the change to do it took me just a couple of hours. This gives a lot of benefits. Since dispatch programs automatically eliminate duplicate reads and guards, the read and guard by the multi-dispatcher and the read in order to pass the decontainerized value are coalesced. This means less repeated work prior to specialization and JIT compilation, and also only a single read and guard in the specialized code after it. With the value to be passed already guarded, we can trivially select a candidate taking two bare Int values, which means there are no further reads and guards needed in the callee either.

A less obvious benefit, but one that will become important with planned future work, is that this means Scalar containers escape to callees far less often. This creates further opportunities for escape analysis.
While the MoarVM escape analyzer and scalar replacer is currently quite limited, I hope to return to working on it in the near future, and expect it will be able to give us even more value now than it would have been able to before.

### Further results

The benchmarks shown earlier are mostly of the “how close are we to realizing that we’ve got an empty loop” nature, which is interesting for assessing how well the optimizer can “see through” dispatches. Here are a few further results on more “traditional” microbenchmarks:

The complex number benchmark is as follows:

my $total-re = 0e0;
for ^2_000_000 {
my $x = 5 + 2i;
my $y = 10 + 3i;
my $z = $x * $x + $y;
$total-re = $total-re + $z.re
}
say $total-re;


That is, just a bunch of operators (multi dispatch) and method calls, where we really do use the result. For now, we’re tied with Python and a little behind Ruby on this benchmark (and a surprising 48 times faster than the same thing done with Perl’s Math::Complex), but this is also a case that stands to see a huge benefit from escape analysis and scalar replacement in the future.

The hash access benchmark is:

my %h = a => 10, b => 12;
my $total = 0;
for ^10_000_000 {
$total = $total + %h<a> + %h<b>;
}

And the hash store one is:

my @keys = 'a'..'z';
for ^500_000 {
my %h;
for @keys {
%h{$_} = 42;
}
}


The improvements are nothing whatsoever to do with hashing itself, but instead look to be mostly thanks to much tighter code all around due to caller-side decontainerization. That can have a secondary effect of bringing things under the size limit for inlining, which is also a big help. Speedup factors of 2x and 1.85x are welcome, although we could really do with the same level of improvement again for me to be reasonably happy with our results.

Finally, the file reading benchmark:

my $fh = open "longfile";
my $chars = 0;
for $fh.lines { $chars = $chars + .chars };
$fh.close;
say $chars

Again, nothing specific to I/O got faster, but when dispatch – the glue that puts together all the pieces – gets a boost, it helps all over the place. (We are also decently competitive on this benchmark, although tend to be slower the moment the UTF-8 decoder can’t take its “NFG can’t possibly apply” fast path.)

### And in less micro things…

I’ve also started looking at larger programs, and hearing results from others about theirs. It’s mostly encouraging:

• The long-standing Text::CSV benchmark test-t has seen roughly 20% improvement (thanks to lizmat for measuring)
• A simple Cro::HTTP test application gets through about 10% more requests per second
• MoarVM contributor dogbert did comparative timings of a number of scripts; the most significant improvement saw a drop from 25s to 7s, most are 10%-30% faster, some without change, and only one that slowed down.
• There’s around 2.5% improvement on compilation of CORE.setting, the standard library. However, a big pinch of salt is needed here: the compiler itself has changed in a number of places as part of the work, and there were a couple of things tweaked based on looking at profiles that aren’t really related to dispatch.
• Agrammon, an application calculating farming emissions, has seen a slowdown of around 9%. I didn’t get to look at it closely yet, although glancing at profiling output the number of deoptimizations is relatively high, which suggests we’re making some poor optimization decisions somewhere.

### Smaller profiler output

One unpredicted (by me), but also welcome, improvement is that profiler output has become significantly smaller. Likely reasons for this include:

1. The dispatch mechanism supports producing value results (either from constants, input arguments, or attributes read from input arguments).
It entirely replaces an earlier mechanism, “specializer plugins”, which could map guards to a target to invoke, but always required a call to something – even if that something was the identity function. The logic was that this didn’t matter for any really hot code, since the identity function will trivially be inlined away. However, since profile size of the instrumenting profiler is a function of the number of paths through the call tree, trimming loads of calls to the identity function out of the tree makes it much smaller.
2. We used to make lots of calls to the sink method when a value was in sink context. Now, if we see that the type simply inherits that method from Mu, we elide the call entirely (again, it would inline away, but a smaller call graph is a smaller profile).
3. Multiple dispatch caching would previously always call the proto when the cache was missed, but would then not call an onlystar proto again when it got cache hits in the future. This meant the call tree under many multiple dispatches was duplicated in the profile.

This wasn’t just a size issue; it was a bit annoying to have this effect show up in the profile reports too.

To give an example of the difference, I took profiles from Agrammon to study why it might have become slower. The one from before the dispatcher work weighed in at 87MB; the one with the new dispatch mechanism is under 30MB. That means less memory used while profiling, less time to write the profile out to disk afterwards, and less time for tools to load the profiler output. So now it’s faster to work out how to make things faster.

### Is there any bad news?

I’m afraid so. Startup time has suffered. While the new dispatch mechanism is more powerful, pushes more complexity out of the VM into high level code, and is more conducive to reaching higher peak performance, it also has a higher warmup time. At the time of writing, the impact on startup time seems to be around 25%.
I expect we can claw some of that back ahead of the October release.

### What will be broken?

Changes of this scale always come with an amount of risk. We’re merging this some weeks ahead of the next scheduled monthly release in order to have time for more testing, and to address any regressions that get reported. However, even before reaching the point of merging it, we have:

• Ensured it passes the specification test suite, both in normal circumstances, but also under optimizer stressing (where we force it to prematurely optimize everything, so that we tease out optimizer bugs and – given how many poor decisions we force it to make – deoptimization bugs too)
• Used blin to run the tests of ecosystem modules. This is a standard step when preparing Rakudo releases, but in this case we’ve aimed it at the new-disp branches. This found a number of regressions caused by the switch to the new dispatch mechanism, which have been addressed.
• Patched or sent pull requests to a number of modules that were relying on unsupported internal APIs that have now gone away or changed, or on other implementation details. There were relatively few of these, and happily, many of them were fixed up by migrating to supported APIs (which likely didn’t exist at the time the modules were written).

### What happens next?

As I’ve alluded to in a number of places in this post, while there are improvements to be enjoyed right away, there are also new opportunities for further improvement. Some things that are on my mind include:

• Reworking callframe entry and exit. These are still decidedly too costly. Various changes that have taken place while working on the new dispatch mechanism have opened up new opportunities for improvement in this area.
• Avoiding megamorphic pile-ups. Micro-benchmarks are great at hiding these. In fact, the callsame one here is a perfect example!
The point where we do the resumption of a dispatch is inside callsame, so all the inline cache entries of resumptions throughout the program stack up in one place. What we’d like is to have them attached a level down the callstack instead. Otherwise, the level of callsame improvement seen in micro-benchmarks will not be enjoyed in larger applications. This applies in a number of other situations too.

• Applying the new dispatch mechanism to optimize further constructs. For example, a method call that results in invoking the special FALLBACK method could have its callsite easily rewritten to do that, opening the way to inlining.

• Further tuning the code we produce after optimization. There is an amount of waste that should be relatively straightforward to eliminate, and some opportunities to tweak deoptimization such that we’re able to delete more instructions and still retain the ability to deoptimize.

• Continuing with the escape analysis work I was doing before, which should now be rather more valuable. The more flexible callstack/frame handling in place should also unblock my work on scalar replacement of Ints (which needs a great deal of care in memory management, as they may box a big integer, not just a native integer).

• Implementing specialization, JIT, and inlining of dispatch resumptions.

### Thank you

I would like to thank TPF and their donors for providing the funding that has made it possible for me to spend a good amount of my working time on this effort. While I’m to blame for the overall design and much of the implementation of the new dispatch mechanism, plenty of work has also been put in by other MoarVM and Rakudo contributors – especially over the last few months as the final pieces fell into place, and we turned our attention to getting it production ready. I’m thankful to them not only for the code and debugging contributions, but also for much support and encouragement along the way.
It feels good to have this merged, and I look forward to building upon it in the months and years to come.

## vrurg: Secure JSONification?

### Published by Vadim Belman on 2021-09-14T00:00:00

There was an interesting discussion on IRC today. In brief, it was about exposing one’s database structures over an API, and the security implications of this approach. I’d recommend reading the whole thing because Altreus delivers a good (and somewhat emotional 🙂) point on why such a practice is most definitely a bad design decision. Despite having minor objections, I generally agree with him.

But I’m not wearing out my keyboard on this post just to share that discussion. There was something in it that made me feel as if I was missing something. And it came to me a bit later, when I was done with my payjob and got a bit more spare resources for the brain to utilize.

First of all, a bell rang when a hash was mentioned as the mediator between a database and an API return value. I’m somewhat wary about using hashes as return values, primarily for reasons of performance cost and concurrency unsafety. Anyway, the discussion went on and came to the point where it touched on blacklisting of DB table fields vs. whitelisting. The latter is the really worthy approach: marking the fields we want in a JSON (or a hash) rather than marking those we don’t want, because blacklisting requires us to remember to explicitly mark any new sensitive field as prohibited. Apparently, it is easy to forget to stick the mark onto it.

Doesn’t it remind you of something? Aren’t we talking about hashes now? Isn’t this what we sometimes blame JavaScript for: that its objects are free-form, with barely any reliable control over their structure? Thanks to TypeScript for trying to get this fixed in some funky way, which I personally like more than dislike.

That’s when things clicked together. I was giving this answer already on a different occasion: using a class instance is often preferable over a hash.
In the light of JSON/API safety, this simple rule gets us to another rather interesting aspect. Here is an example SmokeMachine provided on IRC:

    to-json %(
        name     => "{ .first-name } { .last-name }",
        password => "***"
    ) given $model


This was about returning basic user account information to a frontend. This is supposed to replace JSONification of a Red model like the following:

model Account {
    has UInt $.id is serial is json-skip;
    has Str $.username is column{ ... };
    has Str $.password is column{ ... } is json-skip;
    has Str $.first-name is column{ ... };
    has Str $.last-name is column{ ... };
}

The model example is mine. By the way, in my opinion, neither the first name nor the last name belongs to this model; they should be part of a separate table where the user’s personal data is kept. In the more general case, a name must either be a single long field or an array which can fit something like “Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso”.

The model clearly demonstrates the blacklist approach, with two fields marked as non-JSONifiable. Now, let’s do it the right way, as I see it:

class API::Data::User {
    has Str:D $.username is required;
    has Str $.first-name;
    has Str $.last-name;

    method !FROM-MODEL($model) {
        self.new: username   => .username,
                  first-name => .first-name,
                  last-name  => .last-name
            given $model
    }

    multi method new(Account:D $model) {
        self!FROM-MODEL($model)
    }

    method COERCE(Account:D $model) {
        self!FROM-MODEL($model)
    }
}


And now, somewhere in our code we can do:

method get-user-info(UInt:D $id) {
    to-json API::Data::User(Account.^load: :$id)
}
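Because the class defines both a new candidate and COERCE, it can also act as a coercion type directly in a signature. A minimal sketch (the helper name is my own invention; it assumes the Account model above):

```raku
# Hypothetical helper: the coercion type in the signature turns an
# Account into an API::Data::User before the body even runs.
sub user-info-json(API::Data::User(Account) $user) {
    to-json $user
}
```

Here the coercion happens at binding time, so the body only ever sees the whitelisted API::Data::User instance.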


With the Cro::RPC::JSON module, this could be part of a general API class which would provide a common interface to both front- and backend:

use Cro::RPC::JSON;
class API::User {
    method get-user-info(UInt:D $id) is json-rpc {
        API::Data::User(Account.^load: :$id)
}
}


With such an implementation, our Raku backend would get an instance of API::Data::User. In the TypeScript frontend code of a private project of mine, I have something like the following snippet, where connection is an object derived from the jayson module:

connection.call("get-user-info", id).then(
(user: User | undefined | null) => { ... }
);


What does it all eventually give us? First, API::Data::User provides the mechanism for whitelisting the fields we do want to expose in the API. Note that with properly defined attributes we’re as explicit about that as one can possibly be. And we do it declaratively, in one single place.

Second, the class prevents us from mistyping field names. It wouldn’t be possible to have something like %( usrname => $model.username, ... ) somewhere else in our codebase. Or, perhaps even more likely, to try %user<frst-name> and wonder where the first name went. We also get protection against wrong data types or undefined values.

It is also likely that working with a class instance would be faster than with a hash. I have this subject covered in another post of mine.

Heh, at some point I thought this post could fit into IRC format… 🤷

## samcv: I am resigning from The Perl Foundation

### Published on 2021-08-07T00:00:00

It is with great sadness that I must announce my resignation as chair of the Perl Foundation’s Community Affairs Team (CAT, the team that responds to Code of Conduct reports), effective immediately. Normally this would be a hard decision to make, but I have no choice given TPF’s recent actions.

A Charter and a Code of Conduct could and should have been passed many months ago by the Board of Directors. Sadly this has not happened.

The TPF Board of Directors has now unilaterally retracted all of the CAT’s transparency reports of 2021 (first, second). This includes the second transparency report that the TPF Board itself approved the contents and penalties of. Retracting the CAT’s transparency reports sends the message that the Board of Directors is not willing to support the CAT, and is not prioritizing the safety of the community. I was not involved in the decision by the Board of Directors. Remaining on the Community Affairs Team would imply I accept or support TPF’s actions. I do not.

The reason given by the Board of Directors was that the CAT shouldn’t have acted before a Charter was passed. And since the CAT acted without such a Charter, all of its reports need to be retracted. Even the ones previously approved by the same Board of Directors! I do not find this reasoning very compelling.
While it is important to have a Charter passed and power delegated to a body that can enforce a Code of Conduct, the safety of the community should be more important! If the Board of Directors can pass a ban and transparency report, then later retract it (without involvement of the CAT), it also has the power to pass a Charter for the CAT, and the power to retract that same Charter based on public pressure. This is what TPF’s retraction demonstrates to everyone: that even if a Charter is passed, TPF may give in to public pressure and walk back their own past statements. I find this unsettling.

I have put a large amount of work into creating a Charter for the CAT and a Code of Conduct. I have submitted this to the Board of Directors several times, each time refining it after comments from the Board of Directors. Even if imperfect, it is important to have some kind of Charter to work with! Sadly this has not happened. What has happened instead is backtracking and now finally retracting and erasing the CAT’s past reports.

The #tpf-cat Slack channel was intended to be used by people working to improve the state of the Community Affairs Team and a Code of Conduct within the community. Instead of improving the state of the Community Affairs Team, it has instead been consumed with people trying to tear it down. I have not been on that Slack channel for several weeks now due to the bad behavior and personal attacks I have received there over a period of months. Effectively zero moderation, which I find unacceptable.

I will not do any volunteer work for The Perl Foundation again until a Charter and a TPF-wide Code of Conduct are passed. I would also need to be confident that the TPF communication channels (be it Slack or whatever platform TPF will use) have an enforced Code of Conduct, a moderation playbook, and independent moderators.

Let me be clear: a Charter, a Code of Conduct and other documents have already been presented to the Board of Directors.
It’s up to the Board of Directors to get them past the finish line, in whatever form it decides. Or not. In any case, it is clear that the Board of Directors is not supporting the Community Affairs Team in its current form, so it is time for me to take my leave.

## rakudo.org: Rakudo compiler, Release #148 (2021.07)

### Published on 2021-07-24T00:00:00

## Jo Christian Oterhals: What not to do — how to mess up for loops in Raku #rakulang

### Published by Jo Christian Oterhals on 2021-07-02T12:31:57

I guess that for many of you what I’m about to write is fairly obvious. But I hadn’t really thought about for loops this way before. So you more or less witness my spontaneous reaction.

The other day Joelle Maslak tweeted something that made me think. Joelle pointed out that in Raku the code blocks of for loops are just lambdas — anonymous functions. After seeing it I have to agree. Not only can I not unsee it, I find that it’s a thing of beauty. But it also made me think about what else is possible with for loops. One idea could be to clean up for loops in general. But what would be more fun was to see if I could create examples that are possible but not things of beauty, i.e. introduce some complexity and worst practices that no one should ever copy.

My first instinct was to check whether the lambdas could be replaced by non-anonymous functions. And they can, provided you flip the order of the for statement and the function.

Now, this isn’t unique in any way. I include the example here just to prove a point: anonymous blocks can be replaced with named subs. Many programming languages can do this, and you have probably done this lots of times (most of what I show here can be done in, say, JavaScript; but since it was Raku that made me think of this stuff, the examples will be in Raku). Personally, though, I’ve never thought about replacing for code blocks with subs.
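The flipped form can be sketched like this (names are my own illustration, not from the original examples):

```raku
sub shout($word) { say $word.uc }

# The named sub stands in for the anonymous block once the
# for is moved into statement-modifier position after the call
shout($_) for <perl raku camelia>;
```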
Mostly I’ve had a sub first and then called it from a for loop later. As I think about it, it makes sense to think about the sub and the loop simultaneously: branching out the loop body into a sub can be a good way to shorten and clean up a piece of code, especially when what happens in the block is a fairly long and maybe convoluted piece of code. It keeps the main code shorter and perhaps, hopefully, more readable.

This way of doing it can also be used with gather/take and similar constructs. I had to use parentheses to make it work.

OK, that was the easy — and perhaps obvious — ones. Now for the uglier stuff. Since everything in Raku is an object, even subroutines, you can reference subs in lists and arrays. In the example below I’ve got two subs, A and B, and reference them in the array @subs. What this enables us to do is to loop through the array and invoke the subs from there. I include this example just as an exercise. Line 15 is basically a way to conditionally pick which sub to call. There may be some practical applications of this, although practicality is beside the point in this article. In any case — what’s possible with named subs is also possible with anonymous functions.

But it can get even worse than this. Have a look at the following example: what we do here is create code conditionally and dynamically (and, honestly, you should never do that). And, again, I haven’t considered whether this has a practical application or not.

So is there a conclusion here? Not in the ordinary sense. But what it does, I guess, is to show that even if something is possible, it’s not necessarily something you should do. It’s the age-old recommendation: do as I say, not as I do.
## vrurg: My Work Environment

### Published by Vadim Belman on 2021-06-24T00:00:00

Just noticed that normally I have 4 editors/IDEs running at the same time:

• Comma for modules and an in-house project
• Vim for scripts, blog and articles, and Perl
• Atom for Rakudo core sources, where Vim Raku syntax support seems to be too slow for some large files
• [Visual Studio Code](https://code.visualstudio.com) for TypeScript+Vue

Vim is the only one I could quit on occasion. What is your state of affairs?

## vrurg: An Error In The Roles Article

### Published by Vadim Belman on 2021-06-22T00:00:00

The recently published article contained a factual error, as was pointed out by Elizabeth Mattijsen. I stated that named arguments do not work in role parameterization, but that is not true. Yet, what I was certain about is that there is clearly something wrong with them. It took a bit of my time and wandering around the Rakudo sources to recover my memories. The actual problem with named parameters is less evident, and I dedicated a section of the article to explaining what is actually going on. In this post I will share a more detailed explanation for those interested.

If anybody wishes to follow along in the code, open the src/Perl6/Metamodel/ParametricRoleGroupHOW.nqp file in the Rakudo sources. There we start with method parameterize. Remember, meanwhile, that the code is NQP, meaning it looks like Raku but lacks many of its features.

At the end of the method we find the nqp::parameterizetype op. It is described in the nqp ops docs. What we must pay attention to is the second parameter of the op, which is named parameter_array. This means one simple thing: the op is only able to recognize positional parameters. In the documentation we also find out that for a given set of parameters the op will return the same type parameterization. Apparently, this is how we make sure that R[Int, "ok"] will remain the same role currying everywhere.
But what happens when named parameters are involved? To make it possible to dispatch over them, ParametricRoleGroupHOW does a trick: it takes the slurpy hash of nameds and uses it as a single positional argument, which is appended to the end of the @args array of positionals. To be consistent, if no nameds are passed in, the NO_NAMEDS constant is pushed instead. It is long in text, but short in the code:

    nqp::push(@args, %named_args || NO_NAMEDS);

Let’s say we parameterize over R[Int, Str]. The @args array will be something like:

    [Int, Str, NO_NAMEDS]

No matter how many times we meet R[Int, Str] in Raku code, the @args array will remain consistent, allowing nqp::parameterizetype to produce a consistent result. But as soon as R[Int, Str, :$foo] is used, the array will look like:

[Int, Str, %named_args]


where %named_args is a slurpy parameter of the parameterize method:

    method parameterize($obj, *@args, *%named_args) {

Each time the method is invoked it will be a different hash object, even if the same named arguments are used! This effectively makes it look like a different set of arguments to the parameterization code. Evidently, a different parameterization will be produced too.

It is theoretically possible for the metamodel code to analyze the hash of nameds, keep track of them, and re-use a hash if the same set of arguments was previously used… But as I mentioned in the article, the new dispatching should be able to handle things in a better and more performant way.

## rakudo.org: Rakudo compiler, Release #147 (2021.06)

### Published on 2021-06-19T00:00:00

## vrurg: Did you know that …

### Published by Vadim Belman on 2021-06-17T00:00:00

Raku is full of surprises. Sometimes I read something that makes me go “oh, really?”. Sometimes I realize that a fact evident to me is not so obvious to others. Here is one of the kind.

Do you know that labels in Raku are objects? Take this:

    FOO: for ^1 { .say }

FOO: is not a syntax construct to place an anchor in code but a way to create a Label instance:

    FOO: dd FOO;
    BAR: say BAR;

Due to its special and even specific nature, class Label doesn’t provide much of an API. What is available are methods to interact with loops:

• next
• redo
• last

Feels somewhat familiar, doesn’t it?

    FOO: for ^10 {
        .say;
        FOO.last;
    }

In a way we can say that last FOO is an indirect method invocation, even though it’s not really true, since the core defines a multi-dispatch routine last, alongside redo and next subs. But the corresponding routine candidates for labels actually do nothing but call Label’s methods.

Once again, objects are just about everywhere in Raku.

## p6steve: Can Raku replace HTML?

### Published by p6steve on 2021-06-09T22:03:47

In my last post, I listed three recent posts that got me thinking about Raku and HTML.
I wondered if two of these could be used together to streamline the composition of web sites.

## Act #1 – LPQ

This is drawn from a great idea of gfldex – Low Profile Quoting. Here’s my interpretation:

method init-qform() {
    my $css = q:to/END/;
#demoFont {
font-size: 16px;
color: #ff0000;
}
END

my $size    = <40>;
my $pattern = <[a-zA-Z0-9 ]+>;

my $html = §<html>(§<head>(§<title>()), §<style>(:type<text/css>, $css),
§<body>(
§<form>(:action<.action>, :method<post>,
§<input>(:type<text>, :required, :name<cf-name>,
:value<.cf-name>, :$size, :$pattern,),

§<p>('Your Email (required)'),    #email type validates input
§<input>(:type<email>, :required, :name<cf-email>,
:value<.cf-email>, :$size,),

§<p>('Your Subject (required)'),
§<input>(:type<text>, :required, :name<cf-subject>,
:value<.cf-subject>, :$size, :$pattern,),

§<p>(:id<demoFont>, 'Your Message (required)'),
§<p>(§<textarea>(:rows<10>, :cols<35>, :required,
:name<cf-message>, '<.cf-message>', ),),

§<input>(:type<submit>, :name<cf-submitted>, :value<Send>,),
) ) );

spurt "templates/qform.crotmp", pretty-print-html($html);
}

In the words of the originator: “While casting my Raku spells I once again had felt the urge for a simple but convenient way to inline fragments of html in code. The language leans itself to the task with colon pairs and slurpy arrays.”

The full code is available at https://github.com/p6steve/CroTemplateTest for your perusal. Here we have configured Raku with a §<html> shortcut that replaces the usual HTML <open attr=”value”>payload</close> tags. (The syntax magic is that ‘§’ is defined as a Class with Associative accessor.)

So what can this do for me?

• express HTML components within a richer logical context
• reduces the impedance of forced separation of component logic
• tags are now function calls – so no more open/close boilerplate
• the smooth Raku attribute syntax … :name<value> is used
• variables ($size,$pattern) help you to DRY
• it works with css

## Act #2 – Cro

BUT – how can Act #1 co-exist with the Cro::WebApp::Template concepts? Sharp eyed readers may have noticed that the HTML above has a couple of examples of that already:

• :value<.cf-email> … places $context.cf-email variable in attribute
• ‘<.cf-message>’ … places $context.cf-message variable in payload

I thoroughly recommend that the curious reader review the Raku Cro services documentation.

So the above init-qform method generates this .crotmp code:

<!DOCTYPE html>
<html>
<title>
</title>
<style type="text/css">#demoFont {
font-size: 16px;
color: #ff0000;
}
</style>
<body>
<form action="<.action>" method="post">
<input type="text" size="40" required pattern="[a-zA-Z0-9 ]+" value="<.cf-name>" name="cf-name" />
<input size="40" required value="<.cf-email>" type="email" name="cf-email" />
<input value="<.cf-subject>" required name="cf-subject" type="text" pattern="[a-zA-Z0-9 ]+" size="40" />
<p>
<textarea name="cf-message" rows="10" required cols="35">
<.cf-message>
</textarea>
</p>
<input name="cf-submitted" type="submit" value="Send" />
</form>
</body>
</html>

Then we can set up a context:

class Context {
    has $.action     = 'mailto:[email protected]';
    has $.cf-name    = 'p6steve';
    has $.cf-email   = '[email protected]';
    has $.cf-subject = 'Raku does HTML';
    has $.cf-message = 'Describe some of your feelings about this...';
}

And apply the context to process the template in a Cro routes file:

use Cro::HTTP::Router;
use Cro::WebApp::Template;
use Cro::TemplateTest::Workshop;

my Workshop $ws = Workshop.new;

sub routes() is export {
    route {
        get -> 'qform' {
            my $context = $ws.context;
            template 'templates/qform.crotmp', $context;
        }
    }
}

## Best of Both

This post illustrates how Raku can combine detailed syntax control to smoothly embed HTML within code logic. This helps to refactor awkward syntax islands so that the underlying problem-solution logic can be encapsulated and clearly expressed.

It also demonstrates the practical combination of the Cro template language with innate Raku power-of-expression to drive more comprehensible, consistent and maintainable code.

Comments and feedback very welcome… ~p6steve

## p6steve: Doing Some Funky HTML Sh*t with Raku

### Published by p6steve on 2021-06-04T11:45:38

Came across some pretty funky PHP/HTML the other day. No, I did not write it! (btw using echo is considered bad practice)

function html_form_code() {
echo '<form action="' . esc_url( $_SERVER['REQUEST_URI'] ) . '" method="post">';
echo '<p>';
echo 'Your Name (required) <br />';
echo '<input type="text" name="cf-name" pattern="[a-zA-Z0-9 ]+"
value="' . ( isset( $_POST["cf-name"] ) ? esc_attr($_POST["cf-name"] ) : '' ) . '" size="40" />';
echo '</p>';
echo '<p>';
echo 'Your Email (required) <br />';
echo '<input type="email" name="cf-email" value="' . ( isset( $_POST["cf-email"] ) ? esc_attr($_POST["cf-email"] ) : '' ) . '" size="40" />';
echo '</p>';
echo '<p>';
echo 'Subject (required) <br />';
echo '<input type="text" name="cf-subject" pattern="[a-zA-Z ]+"
value="' . ( isset( $_POST["cf-subject"] ) ? esc_attr($_POST["cf-subject"] ) : '' ) . '" size="40" />';
echo '</p>';
echo '<p>';
echo 'Your Message (required) <br />';
echo '<textarea rows="10" cols="35" name="cf-message">' . ( isset( $_POST["cf-message"] ) ? esc_attr($_POST["cf-message"] ) : '' ) . '</textarea>';
echo '</p>';
echo '<p><input type="submit" name="cf-submitted" value="Send"/></p>';
echo '</form>';
}

By coincidence, three HTML-ish Raku ideas have recently popped into my inbox courtesy of the Raku Weekly rag:

So this all got me wondering what my funky PHP/HTML sample would look like in a fully fledged Cro / Raku style… in the spirit of keeping this post briefish, I will skip the Cro Templates and CSS parsing for now and hope to cover them in subsequent missives…

First, here is gfldex’s code copied to my source file:

constant term:<␣> = ' ';
constant term:<¶> = $?NL;
constant term:<§> = class :: does Associative {
    sub qh($s) {
        $s.trans([ '<' , '>' , '&' ] => [ '&lt;', '&gt;', '&amp;' ])
    }
    role NON-QUOTE {}
    method AT-KEY($key) {
        when $key ~~ /^ '&' / {
            $key does NON-QUOTE
        }
        when $key ~~ /\w+/ {
            sub (*@a, *%_) {
                ##dd @a;
                ('<' ~ $key ~ (+%_ ?? ␣ !! '')
                ~ %_.map({ .key ~ '="' ~ .value ~ '"' }).join(' ') ~ '>'
                ~ @a.map({ $^e ~~ NON-QUOTE ?? $^e !! $^e.&qh }).join('')
                ~ '</' ~ $key ~ '>') does NON-QUOTE
            }
        }
    }
}

Sharp-eyed viewers will notice that I have made one change … replacing the constant ‘html’ with term:<§> … this is called the section symbol and lurks towards the top left of your keyboard. I find this greatly improves the readability of my embedded html.

So, here’s how my PHP example looks in modern Raku stylee:

my $action     = 'mailto:[email protected]';
my $cf-name    = 'p6steve';
my $cf-email   = '[email protected]';
my $cf-subject = 'Raku does HTML';
my $cf-message = 'Describe your feelings about this...';
my $size       = <40>;
my $pattern    = <[a-zA-Z0-9 ]+>;

put '<!DOCTYPE html>';
put §<html>( ¶,
§<body>( ¶,
§<form>(:$action, :method<post>, ¶,
§<p>('Your Name (required)'), ¶,
§<input>(:type<text>, :required, :name<cf-name>,
:value($cf-name), :$size, :$pattern,), ¶,

§<p>('Your Email (required)'), ¶,    #email type validates input
§<input>(:type<email>, :required, :name<cf-email>,
:value($cf-email), :$size), ¶,

§<p>('Your Subject (required)'), ¶,
§<input>(:type<text>, :required, :name<cf-subject>,
:value($cf-subject), :$size, :$pattern,), ¶,

§<p>('Your Message (required)'), ¶,
§<p>(§<textarea>(:rows<10>, :cols<35>, :required,
:name<cf-message>, $cf-message, ),), ¶,

§<input>(:type<submit>, :name<cf-submitted>, :value<Send>,), ¶,
) ) );

Thoughtfully, gfldex includes a para character ¶ term to make line breaks in the output html source, keeping it human friendly. And here is the html output:

    <DOCTYPE html>
    <html>
    <body>
    <form action="mailto:[email protected]" method="post">
    <p>Your Name (required)</p>
    <input pattern="[a-zA-Z0-9 ]+" type="text" name="cf-name" size="40" required="True" value="p6steve"></input>
    <p>Your Email (required)</p>
    <input size="40" value="[email protected]" type="email" required="True" name="cf-email"></input>
    <p>Your Subject (required)</p>
    <input name="cf-subject" pattern="[a-zA-Z0-9 ]+" type="text" size="40" value="Raku does HTML" required="True"></input>
    <p>Your Message (required)</p>
    <p><textarea required="True" cols="35" rows="10" name="cf-message">Describe your feelings about this...</textarea></p>
    <input name="cf-submitted" value="Send" type="submit"></input>
    </form>
    </body>
    </html>

Personally I love to write (and read) html when done in this kind of programmatic style. Not least, it has cut 19 lines of embedded code to 10 lines (and that means I can squish more code into my screen and into my brain). No longer do I have to dance my right pinkie around the < / > keys or worry about leaving out the closing end tags!!

Another neat helper is the Raku pair syntax: if I define a scalar with the same name as the attribute name, I can avoid repetitive typing and the consequent opportunity to make a mistake… e.g. the :$action attribute in the form tag.
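That shorthand is ordinary Raku colon-pair syntax: :$action builds a Pair whose key is taken from the variable’s own name. A tiny illustration:

```raku
my $action = 'mailto:[email protected]';
say (:$action).key;                        # action
say (:$action) eqv (action => $action);    # True
```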

Hopefully in the next instalment, I will be able to combine the power of Cro::Web::Template to apply the substitution and escape pieces…

## Jo Christian Oterhals: Tim Cook and the slow-burning revolution

It’s almost 10 years since Tim Cook took the reins at Apple. A lot has happened since. But many still talk about him as if he’s just taken over, often lamenting that Apple is not as innovative now as it was under Steve Jobs.

I for one don’t understand why people would think that. It is an undeserved underestimation of him.

Yes, Cook is substantially more grey and dull than Jobs. But something good must have happened in his period as CEO — arguably even something better than under Jobs. Because at the time Cook took over, Apple’s market cap was $354 billion. At the time of writing — May 14, 2021 — Apple is worth $2,048 billion. Tim’s Apple is, in other words, almost 6x as valuable as Steve’s Apple. You don’t get that kind of growth and valuation by only being a pencil pusher.

So why does he get so little credit? There have arguably been a couple of revolutions under him too, but they are not as easy to spot as before. It all comes down to style. Let’s start with a couple of examples.

### New consumer products

The Apple Watch and the AirPods are Cook products. They may not have defined a new segment (Apple products seldom do), and they may not have impressed initially (more on that later), but they’ve grown to dominate the wearables segment with a 51 % market share. In 2019 Apple claimed that their wearable product segment alone was the size of a Fortune 200 company.

The difference from the Jobs days, however, is that new product launches now take a few years to find their roles. Jobs was a master at defining what something was from the get-go, whereas Cook’s Apple uses time and patience to let the new products find their place in the world.

Take the Apple Watch as an example. Starting as a run-of-the-mill smart watch — although a beautiful one — the Apple Watch has iterated into a health focused power house. It is on the brink of revolutionising how we monitor and predict health issues in a way we’ve never been able to before. There were smartwatches before Apple, but the Apple product put the rest of the industry in a catch-up mode.

The AirPods have a similar story. Starting as run-of-the-mill earbuds they’ve grown into feature rich gizmos with spatial sound, Siri integration, and lots of other stuff. As was the case with Apple Watch, they once again put other dominant firms (Sony springs to mind) in catch-up mode.

### The invisible underpinnings

Lastly, their ARM-based chip designs are what made these other gadgets possible. No AirPods without the W1 system-on-a-chip. No Apple Watch without the S1. Initially they built the iPhones on third-party chipsets. But the iPhone (and the iPad) wouldn’t have grown into what they are now without the A-series chips. The high-end iPads and the Macs have just gotten the proprietary M1 chipset. One can only speculate as to what these products can become when things settle down a bit.

I don’t own either of these gadgets and am a casual iPhone user at best. So I can’t be accused of being a fanboy. But even so, I find it hard to overestimate the Cook-era Apple. As you see, I think Apple is just as revolutionary now as it was under Jobs. Their products are, eventually, just as groundbreaking. It’s just that Cook plays the long game and sees the revolution play out over time.

The two couldn’t be more different. But you’d be hard pressed to find flaws in their respective results.

This article started as a comment to Erik Engheim’s article Apple is Turning Into the Next Microsoft.

## 6guts: Raku multiple dispatch with the new MoarVM dispatcher

I recently wrote about the new MoarVM dispatch mechanism, and in that post noted that I still had a good bit of Raku’s multiple dispatch semantics left to implement in terms of it. Since then, I’ve made a decent amount of progress in that direction. This post contains an overview of the approach taken, and some very rough performance measurements.

### My goodness, that’s a lot of semantics

Of all the kinds of dispatch we find in Raku, multiple dispatch is the most complex. Multiple dispatch allows us to write a set of candidates, which are then selected by the number of arguments:

multi ok($condition, $desc) {
    say ($condition ?? 'ok' !! 'not ok') ~ " - $desc";
}
multi ok($condition) {
    ok($condition, '');
}


Or the types of arguments:

multi to-json(Int $i) { ~$i }
multi to-json(Bool $b) { $b ?? 'true' !! 'false' }


And not just one argument, but potentially many:

multi truncate(Str $str, Int $chars) {
    $str.chars < $chars
        ?? $str
        !! $str.substr(0, $chars) ~ '...'
}
multi truncate(Str $str, Str $after) {
    with $str.index($after) -> $pos {
        $str.substr(0, $pos) ~ '...'
    }
    else {
        $str
    }
}

We may write where clauses to differentiate candidates on properties that are not captured by nominal types:

multi fac($n where $n <= 1) { 1 }
multi fac($n) { $n * fac($n - 1) }


Every time we write a set of multi candidates like this, the compiler will automatically produce a proto routine. This is what is installed in the symbol table, and holds the candidate list. However, we can also write our own proto, and use the special term {*} to decide at which point we do the dispatch, if at all.

proto mean($collection) {
    $collection.elems == 0 ?? Nil !! {*}
}
multi mean(@arr) {
@arr.sum / @arr.elems
}
multi mean(%hash) {
%hash.values.sum / %hash.elems
}


Candidates are ranked by narrowness (using topological sorting). If multiple candidates match, but they are equally narrow, then that’s an ambiguity error. Otherwise, we call the narrowest one. The candidate we choose may then use callsame and friends to defer to the next narrowest candidate, which may do the same, until we reach the most general matching one.
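The narrowness ordering can be sketched in a few lines of Python, where bool, int and object happen to form a subtype chain. This is a toy model only, not Rakudo's implementation; the function names are made up for illustration:

```python
# Toy narrowness ordering for multi candidate signatures: signature a is
# strictly narrower than b if each parameter type of a is a subclass of
# the corresponding type in b, and the two signatures differ.

def narrower(a, b):
    """True if signature a (a tuple of types) is strictly narrower than b."""
    return (len(a) == len(b)
            and a != b
            and all(issubclass(x, y) for x, y in zip(a, b)))

def rank(candidates):
    # Narrower candidates first - a crude stand-in for the topological
    # sort that Rakudo actually performs on the candidate list.
    return sorted(candidates,
                  key=lambda c: sum(narrower(c, other) for other in candidates),
                  reverse=True)

# bool is a subclass of int, which is a subclass of object.
print(rank([(int,), (object,), (bool,)]))
```

A real implementation has to handle incomparable candidates (the source of ambiguity errors), which this simple counting trick glosses over.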

### Multiple dispatch is everywhere

Raku leans heavily on multiple dispatch. Most operators in Raku are compiled into calls to multiple dispatch subroutines. Even $a + $b will be a multiple dispatch. This means doing multiple dispatch efficiently is really important for performance. Given the richness of its semantics, this is potentially a bit concerning. However, there’s good news too.

### Most multiple dispatches are boring

The overwhelmingly common case is that we have:

• A decision made only by the number of arguments and nominal types
• No where clauses
• No custom proto
• No callsame

This isn’t to say the other cases are unimportant; they are really quite useful, and it’s desirable for them to perform well. However, it’s also desirable to make what savings we can in the common case. For example, we don’t want to eagerly calculate the full set of possible candidates for every single multiple dispatch, because the majority of the time only the first one matters. This is not just a time concern: recall that the new dispatch mechanism stores dispatch programs at each callsite, and if we store the list of all matching candidates at each of those, we’ll waste a lot of memory too.

### How do we do today?

The situation in Rakudo today is as follows:

• If the dispatch is decided by arity and nominal type only, and you don’t call it with flattening args, it’ll probably perform quite decently, and perhaps even enjoy inlining of the candidate and elimination of duplicate type checks that would take place on the slow path. This is thanks to the proto holding a “dispatch cache”, a special-case mechanism implemented in the VM that uses a search tree, with one level per argument.
• If that’s the case but it has a custom proto, it’s not too bad either: inlining isn’t going to happen, but it can still use the search tree
• If it uses where clauses, it’ll be slow, because the search tree only deals in finding one candidate per set of nominal types, and so we can’t use it
• The same reasoning applies to callsame; it’ll be slow too

Effectively, the situation today is that you simply don’t use where clauses in a multiple dispatch if it’s anywhere near a hot path (well, assuming you know where the hot paths are, and know that this kind of dispatch is slow). Ditto for callsame, although that’s less commonly reached for. The question is: can we do better with the new dispatcher?

### Guard the types

Let’s start out with seeing how the simplest cases are dealt with, and build from there. (This is actually what I did in terms of the implementation, but at the same time I had a rough idea where I was hoping to end up.)

Recall this pair of candidates:

multi truncate(Str $str, Int $chars) {
    $str.chars < $chars
        ?? $str
        !! $str.substr(0, $chars) ~ '...'
}
multi truncate(Str $str, Str $after) {
    with $str.index($after) -> $pos {
        $str.substr(0, $pos) ~ '...'
    }
    else {
        $str
    }
}

We then have a call truncate($message, "\n"), where $message is a Str. Under the new dispatch mechanism, the call is made using the raku-call dispatcher, which identifies that this is a multiple dispatch, and thus delegates to raku-multi. (Multi-method dispatch ends up there too.)

The record phase of the dispatch – on the first time we reach this callsite – will proceed as follows:

1. Iterate over the candidates
2. If a candidate doesn’t match on argument count, just discard it. Since the shape of a callsite is a constant, and we calculate dispatch programs at each callsite, we don’t need to establish any guards for this.
3. If it matches on types and concreteness, note which parameters are involved and what kinds of guards they need.
4. If there was no match or an ambiguity, report the error without producing a dispatch program.
5. Otherwise, having established the type guards, delegate to the raku-invoke dispatcher with the chosen candidate.

When we reach the same callsite again, we can run the dispatch program, which quickly checks if the argument types match those we saw last time, and if they do, we know which candidate to invoke. These checks are very cheap – far cheaper than walking through all of the candidates and examining each of them for a match. The optimizer may later be able to prove that the checks will always come out true and eliminate them.

Thus the whole of the dispatch process – at least for this simple case where we only have types and arity – can be “explained” to the virtual machine as “if the arguments have these exact types, invoke this routine”. It’s pretty much the same as we were doing for method dispatch, except there we only cared about the type of the first argument – the invocant – and the value of the method name.

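The record-then-run idea can be modeled in miniature: a per-callsite cache keyed by the argument types, filled on first call and reused while the type "guards" hold. This is a hypothetical sketch in Python terms, not MoarVM's actual code:

```python
# Toy model of per-callsite "dispatch programs": the first call records
# which candidate matched the argument types; subsequent calls just check
# the cheap type guards (the cache key) and reuse the decision.

def make_callsite(candidates):
    cache = {}  # tuple of argument types -> selected candidate

    def call(*args):
        key = tuple(type(a) for a in args)
        if key not in cache:  # record phase: walk the candidate list once
            for types, fn in candidates:
                if len(types) == len(args) and all(
                        isinstance(a, t) for a, t in zip(args, types)):
                    cache[key] = fn
                    break
            else:
                raise TypeError("no matching candidate")
        return cache[key](*args)  # run phase: guards already satisfied

    return call

# Candidates listed narrowest first (bool is a subclass of int in Python).
to_json = make_callsite([
    ((bool,), lambda b: 'true' if b else 'false'),
    ((int,),  lambda i: str(i)),
])

print(to_json(42), to_json(True))
```

The point of the model is that after the first call with a given type tuple, dispatch is a dictionary lookup rather than a walk over all candidates, mirroring how the recorded dispatch program avoids re-examining every candidate.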
(Also recall from the previous post that if it’s a multi-method dispatch, then both method dispatch and multiple dispatch will guard the type of the first argument, but the duplication is eliminated, so only one check is done.)

### That goes in the resumption hole

Coming up with good abstractions is difficult, and therein lies much of the challenge of the new dispatch mechanism. Raku has quite a number of different dispatch-like things. However, encoding all of them directly in the virtual machine leads to high complexity, which makes building reliable optimizations (or even reliable unoptimized implementations!) challenging. Thus the aim is to work out a comparatively small set of primitives that allow for dispatches to be “explained” to the virtual machine in such a way that it can deliver decent performance.

It’s fairly clear that callsame is a kind of dispatch resumption, but what about the custom proto case and the where clause case? It turns out that these can both be neatly expressed in terms of dispatch resumption too (the where clause case needing one small addition at the virtual machine level, which in time is likely to be useful for other things too). Not only that, but encoding these features in terms of dispatch resumption is also quite direct, and thus should be efficient. Every trick we teach the specializer about doing better with dispatch resumptions can benefit all of the language features that are implemented using them, too.

### Custom protos

Recall this example:

proto mean($collection) {
    $collection.elems == 0 ?? Nil !! {*}
}

Here, we want to run the body of the proto, and then proceed to the chosen candidate at the point of the {*}. By contrast, when we don’t have a custom proto, we’d like to simply get on with calling the correct multi.

To achieve this, I first moved the multi candidate selection logic from the raku-multi dispatcher to the raku-multi-core dispatcher. The raku-multi dispatcher then checks if we have an “onlystar” proto (one that does not need us to run it). If so, it delegates immediately to raku-multi-core. If not, it saves the arguments to the dispatch as the resumption initialization state, and then calls the proto. The proto‘s {*} is compiled into a dispatch resumption. The resumption then delegates to raku-multi-core. Or, in code:

nqp::dispatch('boot-syscall', 'dispatcher-register', 'raku-multi',
    # Initial dispatch, only setting up resumption if we need to invoke the
    # proto.
    -> $capture {
        my $callee := nqp::captureposarg($capture, 0);
        my int $onlystar := nqp::getattr_i($callee, Routine, '$!onlystar');
        if $onlystar {
            # Don't need to invoke the proto itself, so just get on with the
            # candidate dispatch.
            nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-multi-core', $capture);
        }
        else {
            # Set resume init args and run the proto.
            nqp::dispatch('boot-syscall', 'dispatcher-set-resume-init-args', $capture);
            nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-invoke', $capture);
        }
    },
    # Resumption means that we have reached the {*} in the proto and so now
    # should go ahead and do the dispatch. Make sure we only do this if we
    # are signalled that it's a resume for an onlystar (resumption kind 5).
    -> $capture {
        my $track_kind := nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $capture, 0);
        nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_kind);
        my int $kind := nqp::captureposarg_i($capture, 0);
        if $kind == 5 {
            nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-multi-core',
                nqp::dispatch('boot-syscall', 'dispatcher-get-resume-init-args'));
        }
        elsif !nqp::dispatch('boot-syscall', 'dispatcher-next-resumption') {
            nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-constant',
                nqp::dispatch('boot-syscall', 'dispatcher-insert-arg-literal-obj',
                    $capture, 0, Nil));
        }
    });

### Two become one

Deferring to the next candidate (for example with callsame) and trying the next candidate because a where clause failed look very similar: both involve walking through a list of possible candidates. There are some details, but they have a great deal in common, and it’d be nice if that could be reflected in how multiple dispatch is implemented using the new dispatcher.

Before that, a slightly terrible detail about how things work in Rakudo today when we have where clauses. First, the dispatcher does a “trial bind”, where it asks the question: would this signature bind? To do this, it has to evaluate all of the where clauses. Worse, it has to use the slow-path signature binder too, which interprets the signature, even though we can in many cases compile it. If the candidate matches, great, we select it, and then invoke it…which runs the where clauses a second time, as part of the compiled signature binding code. There is nothing efficient about this at all, except for it being by far more efficient on developer time, which is why it happened that way.

Anyway, it goes without saying that I’m rather keen to avoid this duplicate work and the slow-path binder where possible as I re-implement this using the new dispatcher. And, happily, a small addition provides a solution. There is an op assertparamcheck, which any kind of parameter checking compiles into (be it type checking, where clause checking, etc.) This triggers a call to a function that gets the arguments and the thing we were trying to call, and can then pick through them to produce an error message. The trick is to provide a way to invoke a routine such that a bind failure, instead of calling the error reporting function, will leave the routine and then do a dispatch resumption! This means we can turn failure to pass where clause checks into a dispatch resumption, which will then walk to the next candidate and try it instead.

### Trivial vs. non-trivial

This gets us most of the way to a solution, but there’s still the question of being memory and time efficient in the common case, where there is no resumption and no where clauses. I coined the term “trivial multiple dispatch” for this situation, which makes the other situation “non-trivial”. In fact, I even made a dispatcher called raku-multi-non-trivial! There are two ways we can end up there.

1. The initial attempt to find a matching candidate determines that we’ll have to consider where clauses. As soon as we see this is the case, we go ahead and produce a full list of possible candidates that could match. This is a linked list (see my previous post for why).
2. The initial attempt to find a matching candidate finds one that can be picked based purely on argument count and nominal types. We stop there, instead of trying to build a full candidate list, and run the matching candidate. In the event that a callsame happens, we end up in the trivial dispatch resumption handler, which – since this situation is now non-trivial – builds the full candidate list, snips the first item off it (because we already ran that), and delegates to raku-multi-non-trivial.

Lost in this description is another significant improvement: today, when there are where clauses, we entirely lose the ability to use the MoarVM multiple dispatch cache, but under the new dispatcher, we store a type-filtered list of candidates at the callsite, and then cheap type guards are used to check it is valid to use.

### Preliminary results

I did a few benchmarks to see how the new dispatch mechanism did with a couple of situations known to be sub-optimal in Rakudo today. These numbers do not reflect what is possible, because at the moment the specializer does not have much of an understanding of the new dispatcher. Rather, they reflect the minimal improvement we can expect.

Consider this benchmark using a multi with a where clause to recursively implement factorial.

multi fac($n where $n <= 1) { 1 }
multi fac($n) { $n * fac($n - 1) }
for ^100_000 {
fac(10)
}
say now - INIT now;


This needs some tweaks (and to be run under an environment variable) to use the new dispatcher; these are temporary, until such a time I switch Rakudo over to using the new dispatcher by default:

use nqp;
multi fac($n where $n <= 1) { 1 }
multi fac($n) { $n * nqp::dispatch('raku-call', &fac, $n - 1) }
for ^100_000 {
    nqp::dispatch('raku-call', &fac, 10);
}
say now - INIT now;

On my machine, the first runs in 4.86s, the second in 1.34s. Thus under the new dispatcher this runs in a little over a quarter of the time it used to – quite a significant improvement already.

A case involving callsame is also interesting to consider. Here it is without using the new dispatcher:

multi fallback(Any $x) { "a$x" }
multi fallback(Numeric $x) { "n" ~ callsame }
multi fallback(Real $x) { "r" ~ callsame }
multi fallback(Int $x) { "i" ~ callsame }
for ^1_000_000 {
fallback(4+2i);
fallback(4.2);
fallback(42);
}
say now - INIT now;


And with the temporary tweaks to use the new dispatcher:

use nqp;
multi fallback(Any $x) { "a$x" }
multi fallback(Numeric $x) { "n" ~ new-disp-callsame }
multi fallback(Real $x) { "r" ~ new-disp-callsame }
multi fallback(Int $x) { "i" ~ new-disp-callsame }
for ^1_000_000 {
    nqp::dispatch('raku-call', &fallback, 4+2i);
    nqp::dispatch('raku-call', &fallback, 4.2);
    nqp::dispatch('raku-call', &fallback, 42);
}
say now - INIT now;

On my machine, the first runs in 31.3s, the second in 11.5s, meaning that with the new dispatcher we manage it in a little over a third of the time that current Rakudo takes.

These are both quite encouraging, but as previously mentioned, a majority of multiple dispatches are of the trivial kind, not using these features. If I make the most common case worse on the way to making other things better, that would be bad. It’s not yet possible to make a fair comparison of this: trivial multiple dispatches already receive a lot of attention in the specializer, and it doesn’t yet optimize code using the new dispatcher well. Of note, in an example like this:

multi m(Int) { }
multi m(Str) { }
for ^1_000_000 {
    m(1);
    m("x");
}
say now - INIT now;

Inlining and other optimizations will turn this into an empty loop, which is hard to beat. There is one thing we can already do, though: run it with the specializer disabled. The new dispatcher version looks like this:

use nqp;
multi m(Int) { }
multi m(Str) { }
for ^1_000_000 {
    nqp::dispatch('raku-call', &m, 1);
    nqp::dispatch('raku-call', &m, "x");
}
say now - INIT now;

The results are 0.463s and 0.332s respectively. Thus, the baseline execution time – before the specializer does its magic – is less using the new general dispatch mechanism than it is using the special-case multiple dispatch cache that we currently use. I wasn’t sure what to expect here before I did the measurement. Given we’re going from a specialized mechanism that has been profiled and tweaked to a new general mechanism that hasn’t received such attention, I was quite ready to be doing a little bit worse initially, and would have been happy with parity. Running in 70% of the time was a bigger improvement than I expected at this point.

I expect that once the specializer understands the new dispatch mechanism better, it will be able to also turn the above into an empty loop – however, since more iterations can be done per optimization, this should still show up as a win for the new dispatcher.

### Final thoughts

With one relatively small addition, the new dispatch mechanism is already handling most of the Raku multiple dispatch semantics. Furthermore, even without the specializer and JIT really being able to make a good job of it, some microbenchmarks already show a factor of 3x-4x improvement. That’s a pretty good starting point.

There’s still a good bit to do before we ship a Rakudo release using the new dispatcher. However, multiple dispatch was the biggest remaining threat to the design: it’s rather more involved than other kinds of dispatch, and it was quite possible that an unexpected shortcoming could trigger another round of design work, or reveal that the general mechanism was going to struggle to perform compared to the more specialized one in the baseline, unoptimized case. So far, there’s no indication of either of these, and I’m cautiously optimistic that the overall design is about right.

## p6steve: raku:34 python:19 extreme math

### Published by p6steve on 2021-04-02T17:56:04

Coming off the excellent raku weekly news, my curiosity was piqued by a tweet about big-endian smells that referenced a blog about “extreme math”. After getting my fill of COBOL mainframe nostalgia, the example of Muller’s Recurrence got me thinking.

The simple claim made in the tweet thread was:

Near the end it [the blog] states that no modern language has fixed point, but Raku (formerly Perl6) has a built-in rational type, which is quite an interesting comparison. It keeps two integers for the numerator and the denominator, and no loss of precision occurs.

I have also covered some of the benefits of the raku approach to math in a previous blog, Machine Math and Raku. Often the example given is 0.1 + 0.2 => 0.3, which trips up a lot of languages. I like this example, but I am not entirely convinced by it – sure, it can be odd when a programming newbie sees a slightly different result caused by floating point conversions – but it is too mickey mouse to be a serious concern.

## The Muller Extreme Challenge

This challenge starts with seemingly innocuous equations and quickly descends into very substantial errors. To quote from the Technical Archaeologist blog:

Jean-Michel Muller is a French computer scientist with perhaps the best computer science job in the world. He finds ways to break computers using math. I’m sure he would say he studies reliability and accuracy problems, but no no no: he designs math problems that break computers. One such problem is his recurrence formula, which (written out from the code below) looks like this:

    x[i] = 108 - (815 - 1500/x[i-2]) / x[i-1]

That doesn’t look so scary, does it? The recurrence problem is useful for our purposes because:

• It is straightforward math, no complicated formulas or concepts.
• We start off with two decimal places, so it’s easy to imagine this happening with a currency calculation.
• The error produced is not a slight rounding error but orders of magnitude off.

And here’s a quick python script that produces floating point and fixed point versions of Muller’s Recurrence side by side:

from decimal import Decimal

def rec(y, z):
    return 108 - ((815 - 1500/z) / y)

def floatpt(N):
    x = [4, 4.25]
    for i in range(2, N+1):
        x.append(rec(x[i-1], x[i-2]))
    return x

def fixedpt(N):
    x = [Decimal(4), Decimal(17)/Decimal(4)]
    for i in range(2, N+1):
        x.append(rec(x[i-1], x[i-2]))
    return x

N = 30
flt = floatpt(N)
fxd = fixedpt(N)
for i in range(N+1):
    print(str(i) + ' | ' + str(flt[i]) + ' | ' + str(fxd[i]))

Which gives us the following output:

i  | floating pt        | fixed pt
-- | ------------------ | ---------------------------
0  | 4                  | 4
1  | 4.25               | 4.25
2  | 4.47058823529      | 4.4705882352941176470588235
3  | 4.64473684211      | 4.6447368421052631578947362
4  | 4.77053824363      | 4.7705382436260623229461618
5  | 4.85570071257      | 4.8557007125890736342039857
6  | 4.91084749866      | 4.9108474990827932004342938
7  | 4.94553739553      | 4.9455374041239167246519529
8  | 4.96696240804      | 4.9669625817627005962571288
9  | 4.98004220429      | 4.9800457013556311118526582
10 | 4.9879092328       | 4.9879794484783912679439415
11 | 4.99136264131      | 4.9927702880620482067468253
12 | 4.96745509555      | 4.9956558915062356478184985
13 | 4.42969049831      | 4.9973912683733697540253088
14 | -7.81723657846     | 4.9984339437852482376781601
15 | 168.939167671      | 4.9990600687785413938424188
16 | 102.039963152      | 4.9994358732880376990501184
17 | 100.099947516      | 4.9996602467866575821700634
18 | 100.004992041      | 4.9997713526716167817979714
19 | 100.000249579      | 4.9993671517118171375788238
20 | 100.00001247862016 | 4.9897059157620938291040004
21 | 100.00000062392161 | 4.7951151851630947311130380
22 | 100.0000000311958  | 0.7281074924258006736651754
23 | 100.00000000155978 | -581.7081261405031229400219627
24 | 100.00000000007799 | 105.8595186892360167901632650
25 | 100.0000000000039  | 100.2767586430669099906187869
26 | 100.0000000000002  | 100.0137997241561168045699158
27 | 100.00000000000001 | 100.0006898905241097140861868
28 | 100.0              | 100.0000344942738135445216746
29 | 100.0              | 100.0000017247126631766583580
30 | 100.0              | 100.0000000862356186943169827

Up until about the 12th iteration, the rounding error seems more or less negligible, but things quickly go off the rails. Floating point math converges around a number twenty times the value of what the same calculation with fixed point math produces.

Lest you think it is unlikely that anyone would do a recursive calculation so many times over: this is exactly what happened in 1991, when the Patriot Missile control system miscalculated the time and killed 28 people. And it turns out floating point math has blown lots of stuff up completely by accident. Mark Stadtherr gave an incredible talk about this called High Performance Computing: are we just getting wrong answers faster? You should read it if you want more examples and a more detailed history of the issue than I can offer here. [end quote]

So, basically, python Float dies at iteration #12 and python Fixed/Decimal dies at iteration #19. According to the source text, COBOL dies at iteration #18. Then the argument focuses on the need for the Decimal library.

## How does raku Measure Up?

I do not buy the no loss of precision occurs claim made on twitter beyond the simpler examples, but I do think that Rats should fare well in the face of this kind of challenge. Here’s my code with raku default math:

my \N = 30;
my \x = [];
x[0] = 4;
x[1] = 4.25;

sub f(\y, \z) { 108 - ( (815 - 1500/z) / y ) }

for 2..N -> \i { x[i] = f(x[i-1], x[i-2]) }
for 0..N -> \i { say( i ~ ' | ' ~ x[i] ) }

Quick impression is that raku is a little more faithful to the mathematical description and a little less cramped than the python.

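As an aside before looking at the raku numbers: Python itself can also be made exact here, using the standard library's fractions.Fraction, which (like Raku's Rat) keeps an integer numerator and denominator. Unlike Rat, it has no denominator size limit, so it never silently degrades:

```python
# Muller's Recurrence with exact rational arithmetic via fractions.Fraction.
# Arbitrary-precision numerator and denominator means zero drift at every step.
from fractions import Fraction

def rec(y, z):
    return 108 - (815 - 1500/z) / y

x = [Fraction(4), Fraction(17, 4)]  # 4 and 4.25, represented exactly
for i in range(2, 31):
    x.append(rec(x[-1], x[-2]))

print(float(x[30]))  # still converging toward 5, not pulled off to 100
```

Of course, exactness has a cost: the numerators and denominators grow at every iteration, so this trades memory and time for precision.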
The raku output gives:

0 | 4
1 | 4.25
2 | 4.470588
3 | 4.644737
4 | 4.770538
5 | 4.855701
6 | 4.910847
7 | 4.945537
8 | 4.9669626
9 | 4.9800457
10 | 4.98797945
11 | 4.992770288
12 | 4.9956558915
13 | 4.9973912684
14 | 4.99843394394
15 | 4.999060071971
16 | 4.999435937147
17 | 4.9996615241038
18 | 4.99979690071342
19 | 4.99987813547793
20 | 4.9999268795046
21 | 4.9999561270611577
22 | 4.99997367600571244
23 | 4.99998420552027271
24 | 4.999990523282227659
25 | 4.9999943139585595936
26 | 4.9999965883712560237
27 | 4.99999795302135690799
28 | 4.999998771812315
29 | 4.99999926308729
30 | 4.999999557853926

So, 30 iterations with no loss of precision – and with the native raku math defaults. Nice! Eventually raku breaks at 34 iterations, so raku:34, python:19.

~p6steve

PS. And to reflect the harsh reality of life, Victor Eijkhout’s comment can have the final word – so know your own limits!

This is not a problem of fixed point vs floating point. I think your examples favor Fix because you give it more digits of accuracy. What would happen if you used a Float format where the mantissa is equally long as the total Fix length? Objection #2: I think Cobol / Fix would converge away from 5 if you ran more iterations. The Muller equation has three fixed points: x_n==3, x_n==5, and x_n==100. If you start close enough to 5 it will converge there for a while, but (I’m guessing here; didn’t run all the tests) it will eventually converge to the 100 solution. Since you give the float solution less precision, it simply converges there faster. The only real lesson here is not to code unstable recursions.

## Pawel bbkr Pabian: Asynchronous, parallel and... dead. My Perl 6 daily bread.

### Published by Pawel bbkr Pabian on 2015-09-06T14:00:56

I love Perl 6 asynchronous features. They are so easy to use and can give an instant boost by changing a few lines of code that I got addicted to them. I became an asynchronous junkie. And finally overdosed. Here is my story...

I was processing a document that was divided into chapters, sub-chapters, sub-sub-chapters and so on. Parsed to a data structure it looked like this:

    my %document = (
        '1' => {
            '1.1' => 'Lorem ipsum',
            '1.2' => {
                '1.2.1' => 'Lorem ipsum',
                '1.2.2' => 'Lorem ipsum'
            }
        },
        '2' => {
            '2.1' => {
                '2.1.1' => 'Lorem ipsum'
            }
        }
    );

Every chapter required processing of its children before it could be processed itself. Also, processing of each chapter was quite time consuming - no matter which level it was at and how many children it had. So I started by writing a recursive function to do it:

    sub process (%chapters) {
        for %chapters.kv -> $number, $content {
            note "Chapter $number started";
            &?ROUTINE.($content) if $content ~~ Hash;
            sleep 1; # here the chapter itself is processed
            note "Chapter $number finished";
        }
    }

    process(%document);

So nothing fancy here. Maybe except the current &?ROUTINE variable, which makes recursive code less error prone - there is no need to repeat the subroutine name explicitly. After running it I got the expected DFS (Depth First Search) flow:

$ time perl6 run.pl
Chapter 1 started
Chapter 1.1 started
Chapter 1.1 finished
Chapter 1.2 started
Chapter 1.2.1 started
Chapter 1.2.1 finished
Chapter 1.2.2 started
Chapter 1.2.2 finished
Chapter 1.2 finished
Chapter 1 finished
Chapter 2 started
Chapter 2.1 started
Chapter 2.1.1 started
Chapter 2.1.1 finished
Chapter 2.1 finished
Chapter 2 finished

real    0m8.184s


It worked perfectly, but it was too slow. Because 1 second was required to process each chapter in a serial manner, it ran for 8 seconds total. So without hesitation I reached for Perl 6 asynchronous goodies to process chapters in parallel.

    sub process (%chapters) {
        await do for %chapters.kv -> $number, $content {
            start {
                note "Chapter $number started";
                &?ROUTINE.outer.($content) if $content ~~ Hash;
                sleep 1; # here the chapter itself is processed
                note "Chapter $number finished";
            }
        }
    }

    process(%document);


Now every chapter is processed asynchronously in parallel, first awaiting its children, which are themselves processed asynchronously in parallel. Note that after wrapping the processing in an await/start construct, &?ROUTINE must now point to the outer scope.

$ time perl6 run.pl
Chapter 1 started
Chapter 2 started
Chapter 1.1 started
Chapter 1.2 started
Chapter 2.1 started
Chapter 1.2.1 started
Chapter 2.1.1 started
Chapter 1.2.2 started
Chapter 1.1 finished
Chapter 1.2.1 finished
Chapter 1.2.2 finished
Chapter 2.1.1 finished
Chapter 2.1 finished
Chapter 1.2 finished
Chapter 1 finished
Chapter 2 finished

real    0m3.171s

Perfect. Time dropped to the expected 3 seconds - it was not possible to go any faster, because the document had 3 nesting levels and each required 1 second to process.

Still smiling, I threw a bigger document at my beautiful script - 10 chapters, each with 10 sub-chapters, each with 10 sub-sub-chapters. It started processing, ran for a while... and DEADLOCKED.

Friedrich Nietzsche said that "when you gaze long into an abyss the abyss also gazes into you". The same rule applies to code. After a few minutes me and my code were staring at each other. And I couldn't find why it worked perfectly for small documents but was deadlocking at random moments for big ones. Half an hour later I blinked and got defeated by my own code in the staring contest. So it was time for debugging.

I noticed that when it was deadlocking there was always a constant number of 16 chapters still in progress. And that number looked familiar to me - thread pool!

$ perl6 -e 'say start { }'
Promise.new(
    scheduler => ThreadPoolScheduler.new(
        initial_threads => 0,
        max_threads => 16,
        uncaught_handler => Callable
    ),
    status => PromiseStatus::Kept
)


Every asynchronous task that is planned needs a free thread so it can be executed. And on my system only 16 concurrent threads are allowed, as shown above. To analyze what happened, let's use the document from the first example but also assume the thread pool is limited to 4:

$ perl6 run.pl          # 4 threads available by default
Chapter 1 started       # 3 threads available
Chapter 1.1 started     # 2 threads available
Chapter 2 started       # 1 thread available
Chapter 1.1 finished    # 2 threads available again
Chapter 1.2 started     # 1 thread available
Chapter 1.2.1 started   # 0 threads available
                        # deadlock!

At this moment the chapter 1 subtree holds three threads and waits for one more for chapter 1.2.2, to complete everything and start ascending from the recursion. And the subtree of chapter 2 holds one thread and waits for one more for chapter 2.1, to descend into the recursion. As a result, processing gets to a point where at least one more thread is required to proceed, but all threads are taken and none can be returned to the thread pool. The script deadlocks and stops here forever.

How to solve this problem and maintain parallel processing? There are many ways to do it :) The key to the solution is to process asynchronously only those chapters that do not have unprocessed chapters on a lower level. Luckily Perl 6 offers the perfect tool - promise junctions. It is possible to create a promise that waits for other promises to be kept, and until that happens it is not sent to the thread pool for execution. The following code illustrates that:

    my $p = Promise.allof( Promise.in(2), Promise.in(3) );
    sleep 1;
    say "Promise after 1 second: " ~ $p.perl;
    sleep 3;
    say "Promise after 4 seconds: " ~ $p.perl;


Prints:

    Promise after 1 second: Promise.new(
..., status => PromiseStatus::Planned
)
Promise after 4 seconds: Promise.new(
..., status => PromiseStatus::Kept
)


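The starvation pattern above can be reproduced in miniature with Python's concurrent.futures: a parent task occupies the pool's only worker while waiting on a child task that needs that same worker. A timeout stands in for the infinite deadlock so the sketch actually terminates:

```python
# Reproducing the thread-pool deadlock: the parent task holds the only
# worker while blocking on a child task queued for that same worker.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

pool = ThreadPoolExecutor(max_workers=1)

def parent():
    child = pool.submit(lambda: "child done")
    # Blocks the only worker thread waiting for the child, which can
    # never start; without the timeout this would wait forever.
    return child.result(timeout=1)

future = pool.submit(parent)
try:
    future.result()
    outcome = "completed"
except TimeoutError:
    outcome = "deadlocked (timed out waiting for a free thread)"

print(outcome)
pool.shutdown(wait=True)
```

With max_workers raised to 2 the same code completes, just as the Perl 6 script works fine while the document is shallow enough for the 16-thread pool.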
Let's rewrite processing using this cool property:

    sub process (%chapters) {
        return Promise.allof(
            do for %chapters.kv -> $number, $content {
                my $current = {
                    note "Chapter $number started";
                    sleep 1; # here the chapter itself is processed
                    note "Chapter $number finished";
                };
                if $content ~~ Hash {
                    Promise.allof( &?ROUTINE.($content) ).then( $current );
                }
                else {
                    Promise.start( $current );
                }
            }
        );
    }

    await process(%document);

It solves the problem of a chapter competing with its sub-chapters for free threads while at the same time needing those sub-chapters done before it can process itself. Now awaiting sub-chapters to complete does not require a free thread. Let's run it:

$ perl6 run.pl
    Chapter 1.1 started
    Chapter 1.2.1 started
    Chapter 1.2.2 started
    Chapter 2.1.1 started
    -
    Chapter 1.1 finished
    Chapter 1.2.1 finished
    Chapter 1.2.2 finished
    Chapter 1.2 started
    Chapter 2.1.1 finished
    Chapter 2.1 started
    -
    Chapter 1.2 finished
    Chapter 1 started
    Chapter 2.1 finished
    Chapter 2 started
    -
    Chapter 1 finished
    Chapter 2 finished

    real    0m3.454s


I've added a separator for each second passed so it is easier to follow. When the script starts, chapters 1.1, 1.2.1, 1.2.2 and 2.1.1 do not have sub-chapters at all, so they can take threads from the thread pool immediately. When they are completed after one second, the Promises that were awaiting all of them are kept, and chapters 1.2 and 2.1 can be processed safely on the thread pool. This keeps going until we get out of the recursion.
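The scheduling idea described above is not Raku-specific. Here is a cross-language sketch in Python (not the article's code; the `process` helper and `document` structure are illustrative) of the same trick: a chapter is only submitted to the thread pool once all of its sub-chapters have finished, so waiting never occupies a worker thread.

```python
# Sketch: bottom-up scheduling so that waiting never ties up a worker thread.
from concurrent.futures import ThreadPoolExecutor, Future
import time

pool = ThreadPoolExecutor(max_workers=4)  # assumption: 4 workers, like the pool in the post

def process(chapters):
    """Return a Future that completes when every chapter in this subtree is done."""
    done = Future()
    pending = [len(chapters)]

    def finish_one(_):
        pending[0] -= 1
        if pending[0] == 0:
            done.set_result(None)

    for number, content in chapters.items():
        def work(number=number):
            print(f"Chapter {number} processed")
            time.sleep(0.1)  # here the chapter itself is processed
        if isinstance(content, dict):
            # Defer this chapter: run it only once its subtree's Future is
            # kept. The completion callback does not occupy a worker thread.
            process(content).add_done_callback(
                lambda _, w=work: pool.submit(w).add_done_callback(finish_one))
        else:
            pool.submit(work).add_done_callback(finish_one)
    return done

document = {'1': {'1.1': 'text', '1.2': {'1.2.1': 'text'}}, '2': 'text'}
process(document).result()
```

As in the Raku version, only leaf chapters ever sit in the pool's queue; a parent chapter is enqueued by a callback once its children's combined future completes, so no worker thread is ever parked waiting for another.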

After trying the big document again, it was processed flawlessly in 72 seconds instead of the linear 1000.

I'm high on asynchronous processing again!

You can download the script here and try different data sizes and algorithms for yourself (params are taken from the command line).

## 6guts: Towards a new general dispatch mechanism in MoarVM

My goodness, it appears I’m writing my first Raku internals blog post in over two years. Of course, two years ago it wasn’t even called Raku. Anyway, without further ado, let’s get on with this shared brainache.

### What is dispatch?

I use “dispatch” to mean a process by which we take a set of arguments and end up with some action being taken based upon them. Some familiar examples include:

• Making a method call, such as $basket.add($product, $quantity). We might traditionally call just $product and $quantity the arguments, but for my purposes, all of $basket, the method name 'add', $product, and $quantity are arguments to the dispatch: they are the things we need in order to make a decision about what we're going to do.
• Making a subroutine call, such as uc($youtube-comment). Since Raku sub calls are lexically resolved, in this case the arguments to the dispatch are &uc (the result of looking up the subroutine) and $youtube-comment.
• Calling a multiple dispatch subroutine or method, where the number and types of the arguments are used in order to decide which of a set of candidates is to be invoked. This process could be seen as taking place “inside” of one of the above two dispatches, given we have both multiple dispatch subroutines and methods in Raku.

At first glance, perhaps the first two seem fairly easy and the third a bit more of a handful – which is sort of true. However, Raku has a number of other features that make dispatch rather more, well, interesting. For example:

• wrap allows us to wrap any Routine (sub or method); the wrapper can then choose to defer to the original routine, either with the original arguments or with new arguments
• When doing multiple dispatch, we may write a proto routine that gets to choose when – or even if – the call to the appropriate candidate is made
• We can use routines like callsame in order to defer to the next candidate in the dispatch. But what does that mean? If we’re in a multiple dispatch, it would mean the next most applicable candidate, if any. If we’re in a method dispatch then it means a method from a base class. (The same thing is used to implement going to the next wrapper or, eventually, to the originally wrapped routine too). And these can be combined: we can wrap a multi method, meaning we can have 3 levels of things that all potentially contribute the next thing to call!

Thanks to this, dispatch – at least in Raku – is not always something we do and produce an outcome, but rather a process that we may be asked to continue with multiple times!

Finally, while the examples I’ve written above can all quite clearly be seen as examples of dispatch, a number of other common constructs in Raku can be expressed as a kind of dispatch too. Assignment is one example: the semantics of it depend on the target of the assignment and the value being assigned, and thus we need to pick the correct semantics. Coercion is another example, and return value type-checking yet another.

### Why does dispatch matter?

Dispatch is everywhere in our programs, quietly tying together the code that wants stuff done with the code that does stuff. Its ubiquity means it plays a significant role in program performance. In the best case, we can reduce the cost to zero. In the worst case, the cost of the dispatch is high enough to exceed that of the work done as a result of the dispatch.

To a first approximation, when the runtime “understands” the dispatch the performance tends to be at least somewhat decent, but when it doesn’t there’s a high chance of it being awful. Dispatches tend to involve an amount of work that can be cached, often with some cheap guards to verify the validity of the cached outcome. For example, in a method dispatch, naively we need to walk a linearization of the inheritance graph and ask each class we encounter along the way if it has a method of the specified name. Clearly, this is not going to be terribly fast if we do it on every method call. However, a particular method name on a particular type (identified precisely, without regard to subclassing) will resolve to the same method each time. Thus, we can cache the outcome of the lookup, and use it whenever the type of the invocant matches that used to produce the cached result.
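The method-cache idea in the previous paragraph can be sketched outside of any particular VM. Here is a minimal illustration in Python (not MoarVM code; all names such as `CallSite` are hypothetical) of a per-callsite cache whose validity is checked by a cheap exact-type guard:

```python
# Minimal sketch of a monomorphic method cache: each call site remembers
# the invocant type it last resolved for and the method it found; an
# exact-type guard validates the cached outcome on later calls.

class CallSite:
    def __init__(self, name):
        self.name = name        # the method name this site dispatches on
        self.guard_type = None  # exact type the cached result is valid for
        self.cached = None      # the cached resolved method

    def dispatch(self, obj, *args):
        t = type(obj)  # exact type, deliberately ignoring subclassing
        if t is not self.guard_type:
            # Slow path: walk the linearization of the inheritance graph,
            # asking each class for a method of the given name.
            for klass in t.__mro__:
                if self.name in klass.__dict__:
                    self.cached = klass.__dict__[self.name]
                    break
            else:
                raise AttributeError(self.name)
            self.guard_type = t
        return self.cached(obj, *args)

class Animal:
    def speak(self): return "..."

class Dog(Animal):
    def speak(self): return "woof"

site = CallSite('speak')
print(site.dispatch(Dog()))  # slow path: MRO walk, result cached for Dog
print(site.dispatch(Dog()))  # fast path: guard passes, cached method reused
```

The guard is on the exact type, not the class hierarchy, which is exactly why the cached result can be reused without re-walking the linearization.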

### Specialized vs. generalized mechanisms in language runtimes

When one starts building a runtime aimed at a particular language, and has to do it on a pretty tight budget, the most obvious way to get somewhat tolerable performance is to bake various hot-path language semantics into the runtime. This is exactly how MoarVM started out. Thus, if we look at MoarVM as it stood several years ago, we find things like:

• Some support for method caching
• A multi-dispatch cache highly tied to Raku’s multi-dispatch semantics, and only really able to help when the dispatch is all about nominal types (so using where comes at a very high cost)
• A mechanism for specifying how to find the actual code handle inside of a wrapping code object (for example, a Sub object has a private attribute in it that holds the low-level code handle identifying the bytecode to run)
• Some limited attempts to allow us to optimize correctly in the case we know that a dispatch will not be continued – which requires careful cooperation between compiler and runtime (or less diplomatically, it’s all a big hack)

These are all still there today; however, they are also all on the way out. What's most telling about this list is what isn't included. Things like:

• Private method calls, which would need a different cache – but the initial VM design limited us to one per type
• Qualified method calls ($obj.SomeType::method-name())
• Ways to decently optimize dispatch resumption

A few years back I started to partially address this, with the introduction of a mechanism I called "specializer plugins". But first, what is the specializer?

When MoarVM started out, it was a relatively straightforward interpreter of bytecode. It only had to be fast enough to beat the Parrot VM in order to get a decent amount of usage, which I saw as important to have before going on to implement some more interesting optimizations (back then we didn't have the kind of pre-release automated testing infrastructure we have today, and so depended much more on feedback from early adopters). Anyway, soon after being able to run pretty much as much of the Raku language as any other backend, I started on the dynamic optimizer. It gathered type statistics as the program was interpreted, identified hot code, put it into SSA form, used the type statistics to insert guards, used those together with static properties of the bytecode to analyze and optimize, and produced specialized bytecode for the function in question. This bytecode could elide type checks and various lookups, as well as using a range of internal ops that make all kinds of assumptions, which were safe because of the program properties that were proved by the optimizer.

This is called specialized bytecode because it has had a lot of its genericity – which would allow it to work correctly on all types of value that we might encounter – removed, in favor of working in a particular special case that actually occurs at runtime. (Code, especially in more dynamic languages, is generally far more generic in theory than it ever turns out to be in practice.)
This component – the specializer, known internally as "spesh" – delivered a significant further improvement in the performance of Raku programs, and with time its sophistication has grown, taking in optimizations such as inlining and escape analysis with scalar replacement. These aren't easy things to build – but once a runtime has them, they create design possibilities that didn't previously exist, and make decisions made in their absence look sub-optimal. Of note, those special-cased language-specific mechanisms, baked into the runtime to get some speed in the early days, instead become something of a liability and a bottleneck. They have complex semantics, which means they are either opaque to the optimizer (so it can't reason about them, meaning optimization is inhibited) or they need special casing in the optimizer (a liability).

So, back to specializer plugins. I reached a point where I wanted to take on the performance of things like $obj.?meth (the "call me maybe" dispatch), $obj.SomeType::meth() (dispatch qualified with a class to start looking in), and private method calls in roles (which can't be resolved statically). At the same time, I was getting ready to implement some amount of escape analysis, but realized that it was going to be of very limited utility because assignment had also been special-cased in the VM, with a chunk of opaque C code doing the hot-path stuff.

But why did we have the C code doing that hot-path stuff? Well, because it'd be too expensive to have every assignment call a VM-level function that does a bunch of checks and logic. Why is that costly? Because of function call overhead and the costs of interpretation. This was all true once upon a time.
But, some years of development later:

• Inlining was implemented, and could eliminate the overhead of doing a function call
• We could compile to machine code, eliminating interpretation overhead
• We were in a position where we had type information to hand in the specializer that would let us eliminate branches in the C code, but since it was just an opaque function we called, there was no way to take this opportunity

I solved the assignment problem and the dispatch problems mentioned above with the introduction of a single new mechanism: specializer plugins. They work as follows:

• The first time we reach a given callsite in the bytecode, we run the plugin. It produces a code object to invoke, along with a set of guards (conditions that have to be met in order to use that code object result)
• The next time we reach it, we check if the guards are met, and if so, just use the result
• If not, we run the plugin again, and stack up a guard set at the callsite
• We keep statistics on how often a given guard set succeeds, and then use that in the specializer

The vast majority of cases are monomorphic, meaning that only one set of guards is produced and it always succeeds thereafter. The specializer can thus compile those guards into the specialized bytecode and then assume the given target invocant is what will be invoked. (Further, duplicate guards can be eliminated, so the guards a particular plugin introduces may reduce to zero.)

Specializer plugins felt pretty great. One new mechanism solved multiple optimization headaches.

The new MoarVM dispatch mechanism is the answer to a fairly simple question: what if we get rid of all the dispatch-related special-case mechanisms in favor of something a bit like specializer plugins? The resulting mechanism would need to be more powerful than specializer plugins. Further, I could learn from some of the shortcomings of specializer plugins.
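The per-callsite behaviour just described can be sketched in a few lines. Here is a rough Python illustration (not MoarVM code; `PluginCallSite` and `by_type_plugin` are hypothetical names) of a callsite that stacks up guard sets produced by a plugin and keeps per-entry hit statistics:

```python
# Sketch of a specializer-plugin callsite: the first hit runs the plugin to
# obtain a (guards, target) pair; later hits check the stacked guard sets
# and only fall back to the plugin when none of them match.

class PluginCallSite:
    def __init__(self, plugin):
        self.plugin = plugin  # callable returning (guards, target) for some args
        self.entries = []     # stacked (guards, target) pairs
        self.hits = {}        # per-entry success statistics

    def run(self, *args):
        for i, (guards, target) in enumerate(self.entries):
            if all(g(*args) for g in guards):
                self.hits[i] = self.hits.get(i, 0) + 1
                return target(*args)
        # No guard set matched: run the plugin and stack a new entry.
        guards, target = self.plugin(*args)
        self.entries.append((guards, target))
        return target(*args)

# Example plugin: pick a target based on the argument's exact type, guarded
# so the cached result is only reused for that same type.
def by_type_plugin(x):
    t = type(x)
    return [lambda x, t=t: type(x) is t], (str.upper if t is str else abs)

site = PluginCallSite(by_type_plugin)
print(site.run("hi"))  # plugin runs, stacks a guard set → HI
print(site.run("yo"))  # monomorphic fast path, guard passes → YO
print(site.run(-3))    # guard fails, plugin stacks a second entry → 3
```

In the monomorphic common case only the first entry's guards are ever checked, which is what makes it cheap enough for a dynamic optimizer to compile the guards directly into specialized code.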
Thus, while they will go away after a relatively short lifetime, I think it's fair to say that I would not have been in a place to design the new MoarVM dispatch mechanism without that experience.

### The dispatch op and the bootstrap dispatchers

All the method caching. All the multi dispatch caching. All the specializer plugins. All the invocation protocol stuff for unwrapping the bytecode handle in a code object. It's all going away, in favor of a single new dispatch instruction. Its name is, boringly enough, dispatch. It looks like this:

    dispatch_o result, 'dispatcher-name', callsite, arg0, arg1, ..., argN

Which means:

• Use the dispatcher called dispatcher-name
• Give it the argument registers specified (the callsite referenced indicates the number of arguments)
• Put the object result of the dispatch into the register result

(Aside: this implies a new calling convention, whereby we no longer copy the arguments into an argument buffer, but instead pass the base of the register set and a pointer into the bytecode where the register argument map is found, and then do a lookup registers[map[argument_index]] to get the value for an argument. That alone is a saving when we interpret, because we no longer need a loop around the interpreter per argument.)

Some of the arguments might be things we'd traditionally call arguments. Some are aimed at the dispatch process itself. It doesn't really matter – but it is more optimal if we arrange to put arguments that are only for the dispatch first (for example, the method name), and those for the target of the dispatch afterwards (for example, the method parameters).

The new bootstrap mechanism provides a small number of built-in dispatchers, whose names start with "boot-".
They are:

• boot-value – take the first argument and use it as the result (the identity function, except discarding any further arguments)
• boot-constant – take the first argument and produce it as the result, but also treat it as a constant value that will always be produced (thus meaning the optimizer could consider any pure code used to calculate the value as dead)
• boot-code – take the first argument, which must be a VM bytecode handle, and run that bytecode, passing the rest of the arguments as its parameters; evaluate to the return value of the bytecode
• boot-syscall – treat the first argument as the name of a VM-provided built-in operation, and call it, providing the remaining arguments as its parameters
• boot-resume – resume the topmost ongoing dispatch

That's pretty much it. Every dispatcher we build, to teach the runtime about some other kind of dispatch behavior, eventually terminates in one of these.

### Building on the bootstrap

Teaching MoarVM about different kinds of dispatch is done using nothing less than the dispatch mechanism itself! For the most part, boot-syscall is used in order to register a dispatcher, set up the guards, and provide the result that goes with them. Here is a minimal example, taken from the dispatcher test suite, showing how a dispatcher that provides the identity function would look:

    nqp::dispatch('boot-syscall', 'dispatcher-register', 'identity', -> $capture {
        nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-value', $capture);
    });
    sub identity($x) {
        nqp::dispatch('identity', $x)
    }
    ok(identity(42) == 42, 'Can define identity dispatch (1)');
    ok(identity('foo') eq 'foo', 'Can define identity dispatch (2)');

In the first statement, we call the dispatcher-register MoarVM system call, passing a name for the dispatcher along with a closure, which will be called each time we need to handle the dispatch (which I tend to refer to as the "dispatch callback"). It receives a single argument, which is a capture of arguments (not actually a Raku-level Capture, but the idea – an object containing a set of call arguments – is the same).

Every user-defined dispatcher should eventually use dispatcher-delegate in order to identify another dispatcher to pass control along to. In this case, it delegates immediately to boot-value – meaning it really is nothing except a wrapper around the boot-value built-in dispatcher.

The sub identity contains a single static occurrence of the dispatch op. Given we call the sub twice, we will encounter this op twice at runtime, but the two times are very different.

The first time is the "record" phase. The arguments are formed into a capture and the callback runs, which in turn passes it along to the boot-value dispatcher, which produces the result. This results in an extremely simple dispatch program, which says that the result should be the first argument in the capture. Since there are no guards, this will always be a valid result.

The second time we encounter the dispatch op, it already has a dispatch program recorded there, so we are in run mode. Turning on a debugging mode in the MoarVM source, we can see the dispatch program that results looks like this:

    Dispatch program (1 temporaries)
    Ops:
      Load argument 0 into temporary 0
      Set result object value from temporary 0

That is, it reads argument 0 into a temporary location and then sets that as the result of the dispatch.
Notice how there is no mention of the fact that we went through an extra layer of dispatch; those have zero cost in the resulting dispatch program.

### Capture manipulation

Argument captures are immutable. Various VM syscalls exist to transform them into new argument captures with some tweak, for example dropping or inserting arguments. Here's a further example from the test suite:

    nqp::dispatch('boot-syscall', 'dispatcher-register', 'drop-first', -> $capture {
        my $capture-derived := nqp::dispatch('boot-syscall', 'dispatcher-drop-arg', $capture, 0);
        nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-value', $capture-derived);
    });
    ok(nqp::dispatch('drop-first', 'first', 'second') eq 'second',
        'dispatcher-drop-arg works');

This drops the first argument before passing the capture on to the boot-value dispatcher – meaning that it will return the second argument. Glance back at the previous dispatch program for the identity function. Can you guess how this one will look? Well, here it is:

    Dispatch program (1 temporaries)
    Ops:
      Load argument 1 into temporary 0
      Set result string value from temporary 0

Again, while in the record phase of such a dispatcher we really do create capture objects and make a dispatcher delegation, the resulting dispatch program is far simpler.

Here's a slightly more involved example:

    my $target := -> $x { $x + 1 }
    nqp::dispatch('boot-syscall', 'dispatcher-register', 'call-on-target', -> $capture {
        my $capture-derived := nqp::dispatch('boot-syscall',
            'dispatcher-insert-arg-literal-obj', $capture, 0, $target);
        nqp::dispatch('boot-syscall', 'dispatcher-delegate',
            'boot-code-constant', $capture-derived);
    });
    sub cot() { nqp::dispatch('call-on-target', 49) }
    ok(cot() == 50,
        'dispatcher-insert-arg-literal-obj works at start of capture');
    ok(cot() == 50,
        'dispatcher-insert-arg-literal-obj works at start of capture after link too');

Here, we have a closure stored in a variable $target. We insert it as the first argument of the capture, and then delegate to boot-code-constant, which will invoke that code object and pass the other dispatch arguments to it. Once again, at the record phase, we really do something like:

• Create a new capture with a code object inserted at the start
• Delegate to the boot code constant dispatcher, which…
• …creates a new capture without the original argument and runs bytecode with those arguments

And the resulting dispatch program? It’s this:

    Dispatch program (1 temporaries)
    Ops:
      Load collectable constant at index 0 into temporary 0
      Skip first 0 args of incoming capture; callsite from 0
      Invoke MVMCode in temporary 0



That is, load the constant bytecode handle that we’re going to invoke, set up the args (which are in this case equal to those of the incoming capture), and then invoke the bytecode with those arguments. The argument shuffling is, once again, gone. In general, whenever the arguments we do an eventual bytecode invocation with are a tail of the initial dispatch arguments, the arguments transform becomes no more than a pointer addition.

### Guards

All of the dispatch programs seen so far have been unconditional: once recorded at a given callsite, they shall always be used. The big missing piece to make such a mechanism have practical utility is guards. Guards assert properties such as the type of an argument, or whether the argument is definite (Int:D) or not (Int:U).

Here’s a somewhat longer test case, with some explanations placed throughout it.

    # A couple of classes for test purposes
    my class C1 { }
    my class C2 { }

    # A counter used to make sure we're only invoking the dispatch callback as
    # many times as we expect.
    my $count := 0;

    # A type-name dispatcher that maps a type into a constant string value that
    # is its name. This isn't terribly useful, but it is a decent small example.
    nqp::dispatch('boot-syscall', 'dispatcher-register', 'type-name', -> $capture {
        # Bump the counter, just for testing purposes.
        $count++;

        # Obtain the value of the argument from the capture (using an existing
        # MoarVM op, though in the future this may go away in place of a syscall)
        # and then obtain the string typename also.
        my $arg-val := nqp::captureposarg($capture, 0);
        my str $name := $arg-val.HOW.name($arg-val);

        # This outcome is only going to be valid for a particular type. We track
        # the argument (which gives us an object back that we can use to guard
        # it) and then add the type guard.
        my $arg := nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $capture, 0);
        nqp::dispatch('boot-syscall', 'dispatcher-guard-type', $arg);

        # Finally, insert the type name at the start of the capture and then
        # delegate to the boot-constant dispatcher.
        nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-constant',
            nqp::dispatch('boot-syscall', 'dispatcher-insert-arg-literal-str',
                $capture, 0, $name));
    });

    # A use of the dispatch for the tests. Put into a sub so there's a single
    # static dispatch op, which all dispatch programs will hang off.
    sub type-name($obj) {
        nqp::dispatch('type-name', $obj)
    }

    # Check with the first type, making sure the guard matches when it should
    # (although this test would pass if the guard were ignored too).
    ok(type-name(C1) eq 'C1', 'Dispatcher setting guard works');
    ok($count == 1, 'Dispatch callback ran once');
    ok(type-name(C1) eq 'C1', 'Can use it another time with the same type');
    ok($count == 1, 'Dispatch callback was not run again');

    # Test it with a second type, both record and run modes. This ensures the
    # guard really is being checked.
    ok(type-name(C2) eq 'C2', 'Can handle polymorphic sites when guard fails');
    ok($count == 2, 'Dispatch callback ran a second time for new type');
    ok(type-name(C2) eq 'C2', 'Second call with new type works');

    # Check that we can use it with the original type too, and it has stacked
    # the dispatch programs up at the same callsite.
    ok(type-name(C1) eq 'C1', 'Call with original type still works');
    ok($count == 2, 'Dispatch callback only ran a total of 2 times');

This time two dispatch programs get produced, one for C1:

    Dispatch program (1 temporaries)
    Ops:
      Guard arg 0 (type=C1)
      Load collectable constant at index 1 into temporary 0
      Set result string value from temporary 0

And another for C2:

    Dispatch program (1 temporaries)
    Ops:
      Guard arg 0 (type=C2)
      Load collectable constant at index 1 into temporary 0
      Set result string value from temporary 0

Once again, there are no leftovers from capture manipulation, tracking, or dispatcher delegation; the dispatch program does a type guard against an argument, then produces the result string. The whole call to $arg-val.HOW.name($arg-val) is elided, the dispatcher we wrote encoding the knowledge – in a way that the VM can understand – that a type's name can be considered immutable.

This example is a bit contrived, but now consider that we instead look up a method and guard on the invocant type: that's a method cache! Guard the types of more of the arguments, and we have a multi cache! Do both, and we have a multi-method cache.

The latter is interesting in so far as both the method dispatch and the multi dispatch want to guard on the invocant. In fact, in MoarVM today there will be two such type tests until we get to the point where the specializer does its work and eliminates these duplicated guards. However, the new dispatcher does not treat dispatcher-guard-type as a kind of imperative operation that writes a guard into the resultant dispatch program. Instead, it declares that the argument in question must be guarded. If some other dispatcher already did that, it's idempotent. The guards are emitted once all dispatch programs we delegate through, on the path to a final outcome, have had their say.
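This "declarative rather than imperative" treatment of guards can be illustrated compactly. The sketch below is Python, not MoarVM code, and the `DispatchRecording` name is hypothetical; it only shows why keeping guard requirements as a set makes duplicate declarations free:

```python
# Sketch of declarative guards: dispatchers declare what must be guarded;
# duplicates collapse because requirements are kept as a set, and each
# distinct guard is emitted exactly once when recording finishes.

class DispatchRecording:
    def __init__(self):
        self.required = set()  # (arg_index, kind) guard requirements

    def guard_type(self, arg_index):
        # Declaring the same guard twice is idempotent.
        self.required.add((arg_index, 'type'))

    def emit(self):
        # Emit each distinct guard exactly once, in a stable order.
        return [f"Guard arg {i} ({kind})" for i, kind in sorted(self.required)]

rec = DispatchRecording()
rec.guard_type(0)  # e.g. the method dispatch guards the invocant...
rec.guard_type(0)  # ...and the multi dispatch guards it again
rec.guard_type(1)
print(rec.emit())  # → ['Guard arg 0 (type)', 'Guard arg 1 (type)']
```

Two dispatchers both demanding a type guard on argument 0 yields a single guard in the final program, which is the deduplication described above.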
Fun aside: those being especially attentive will have noticed that the dispatch mechanism is used as part of implementing new dispatchers too, and indeed, this ultimately will mean that the specializer can specialize the dispatchers and have them JIT-compiled into something more efficient too. After all, from the perspective of MoarVM, it's all just bytecode to run; it's just that some of it is bytecode that tells the VM how to execute Raku programs more efficiently!

### Dispatch resumption

A resumable dispatcher needs to do two things:

1. Provide a resume callback as well as a dispatch one when registering the dispatcher
2. In the dispatch callback, specify a capture, which will form the resume initialization state

When a resumption happens, the resume callback will be called, with any arguments for the resumption. It can also obtain the resume initialization state that was set in the dispatch callback. The resume initialization state contains the things needed in order to continue with the dispatch the first time it is resumed. We'll take a look at how this works for method dispatch to see a concrete example.

I'll also, at this point, switch to looking at the real Rakudo dispatchers, rather than simplified test cases. The Rakudo dispatchers take advantage of delegation, duplicate guards, and capture manipulations all having no runtime cost in the resulting dispatch program to, in my mind at least, quite nicely factor what is a somewhat involved dispatch process.

There are multiple entry points to method dispatch: the normal boring $obj.meth(), the qualified $obj.Type::meth(), and the call me maybe $obj.?meth(). These have common resumption semantics – or at least, they can be made to, provided we always carry a starting type in the resume initialization state, which is the type of the object that we do the method dispatch on.

Here is the entry point to dispatch for a normal method dispatch, with the boring details of reporting missing method errors stripped out.

    # A standard method call of the form $obj.meth($arg); also used for the
    # indirect form $obj."$name"($arg). It receives the decontainerized invocant,
    # the method name, and the args (starting with the invocant including any
    # container).
    nqp::dispatch('boot-syscall', 'dispatcher-register', 'raku-meth-call', -> $capture {
        # Try to resolve the method call using the MOP.
        my $obj := nqp::captureposarg($capture, 0);
        my str $name := nqp::captureposarg_s($capture, 1);
        my $meth := $obj.HOW.find_method($obj, $name);

        # Report an error if there is no such method.
        unless nqp::isconcrete($meth) {
            !!! 'Error reporting logic elided for brevity';
        }

        # Establish a guard on the invocant type and method name (however the name
        # may well be a literal, in which case this is free).
        nqp::dispatch('boot-syscall', 'dispatcher-guard-type',
            nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $capture, 0));
        nqp::dispatch('boot-syscall', 'dispatcher-guard-literal',
            nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $capture, 1));

        # Add the resolved method and delegate to the resolved method dispatcher.
        my $capture-delegate := nqp::dispatch('boot-syscall',
            'dispatcher-insert-arg-literal-obj', $capture, 0, $meth);
        nqp::dispatch('boot-syscall', 'dispatcher-delegate',
            'raku-meth-call-resolved', $capture-delegate);
    });

Now for the resolved method dispatcher, which is where the resumption is handled. First, let's look at the normal dispatch callback (the resumption callback is included but empty; I'll show it a little later).

    # Resolved method call dispatcher. This is used to call a method, once we have
    # already resolved it to a callee. Its first arg is the callee, the second and
    # third are the type and name (used in deferral), and the rest are the args to
    # the method.
    nqp::dispatch('boot-syscall', 'dispatcher-register', 'raku-meth-call-resolved',
        # Initial dispatch
        -> $capture {
            # Save dispatch state for resumption. We don't need the method that will
            # be called now, so drop it.
            my $resume-capture := nqp::dispatch('boot-syscall', 'dispatcher-drop-arg',
                $capture, 0);
            nqp::dispatch('boot-syscall', 'dispatcher-set-resume-init-args', $resume-capture);

            # Drop the dispatch start type and name, and delegate to multi-dispatch or
            # just invoke if it's single dispatch.
            my $delegate_capture := nqp::dispatch('boot-syscall', 'dispatcher-drop-arg',
                nqp::dispatch('boot-syscall', 'dispatcher-drop-arg', $capture, 1), 1);
            my $method := nqp::captureposarg($delegate_capture, 0);
            if nqp::istype($method, Routine) && $method.is_dispatcher {
                nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-multi',
                    $delegate_capture);
            }
            else {
                nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-invoke',
                    $delegate_capture);
            }
        },
        # Resumption
        -> $capture {
            ... 'Will be shown later';
        });



There’s an arguable cheat in raku-meth-call: it doesn’t actually insert the type object of the invocant in place of the invocant. It turns out that it doesn’t really matter. Otherwise, I think the comments (which are to be found in the real implementation also) tell the story pretty well.

One important point that may not be clear – but follows a repeating theme – is that the setting of the resume initialization state is also more of a declarative rather than an imperative thing: there isn’t a runtime cost at the time of the dispatch, but rather we keep enough information around in order to be able to reconstruct the resume initialization state at the point we need it. (In fact, when we are in the run phase of a resume, we don’t even have to reconstruct it in the sense of creating a capture object.)

Now for the resumption. I’m going to present a heavily stripped down version that only deals with the callsame semantics (the full thing has to deal with such delights as lastcall and nextcallee too). The resume initialization state exists to seed the resumption process. Once we know we actually do have to deal with resumption, we can do things like calculating the full list of methods in the inheritance graph that we want to walk through. Each resumable dispatcher gets a single storage slot on the call stack that it can use for its state. It can initialize this in the first step of resumption, and then update it as we go. Or more precisely, it can set up a dispatch program that will do this when run.

A linked list turns out to be a very convenient data structure for the chain of candidates we will walk through. We can work our way through a linked list by keeping track of the current node, meaning that there need only be a single thing that mutates, which is the current state of the dispatch. The dispatch program mechanism also provides a way to read an attribute from an object, and that is enough to express traversing a linked list into the dispatch program. This also means zero allocations.

So, without further ado, here is the linked list (rather less pretty in NQP, the restricted Raku subset, than it would be in full Raku):

# A linked list is used to model the state of a dispatch that is deferring
# through a set of methods, multi candidates, or wrappers. The Exhausted class
# is used as a sentinel for the end of the chain. The current state of the
# dispatch points into the linked list at the appropriate point; the chain
# itself is immutable, and shared over (runtime) dispatches.
my class DeferralChain {
    has $!code;
    has $!next;
    method new($code, $next) {
        my $obj := nqp::create(self);
        nqp::bindattr($obj, DeferralChain, '$!code', $code);
        nqp::bindattr($obj, DeferralChain, '$!next', $next);
        $obj
    }
    method code() { $!code }
    method next() { $!next }
};
my class Exhausted {};
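For readers less familiar with NQP, the shape of this state can be modelled in a few lines of Python (purely illustrative; the names `build_chain` and `resume` are mine, not part of MoarVM): the chain is built back to front from the method list, and each resumption step consumes one node until the Exhausted sentinel is reached.

```python
class Exhausted:
    """Sentinel marking the end of the deferral chain."""

class DeferralChain:
    """Immutable linked-list node holding one candidate to defer to."""
    def __init__(self, code, next_node):
        self.code = code
        self.next = next_node

def build_chain(methods):
    # Build back to front, so the head is the first candidate to run.
    chain = Exhausted
    for code in reversed(methods):
        chain = DeferralChain(code, chain)
    return chain

def resume(state):
    # One resumption step: the code to run next and the new state,
    # or (None, state) once the chain is exhausted.
    if state is Exhausted:
        return None, state
    return state.code, state.next

# Walk a chain of three "methods" (plain strings stand in for code objects).
state = build_chain(["Child.m", "Parent.m", "Any.m"])
calls = []
while True:
    code, state = resume(state)
    if code is None:
        break
    calls.append(code)
```

Since each node is immutable, the chain itself can be shared between dispatches; only the pointer to the current node mutates, which is exactly the property the dispatch program exploits.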



And finally, the resumption handling.

nqp::dispatch('boot-syscall', 'dispatcher-register', 'raku-meth-call-resolved',
# Initial dispatch
-> $capture {
    ... # presented earlier
},
# Resumption. The resume init capture's first two arguments are the type
# that we initially did a method dispatch against and the method name
# respectively.
-> $capture {
# Work out the next method to call, if any. This depends on if we have
# an existing dispatch state (that is, a method deferral is already in
# progress).
my $init := nqp::dispatch('boot-syscall', 'dispatcher-get-resume-init-args');
my $state := nqp::dispatch('boot-syscall', 'dispatcher-get-resume-state');
my $next_method;
if nqp::isnull($state) {
# No state, so just starting the resumption. Guard on the
# invocant type and name.
my $track_start_type := nqp::dispatch('boot-syscall', 'dispatcher-track-arg',$init, 0);
nqp::dispatch('boot-syscall', 'dispatcher-guard-type', $track_start_type);
my $track_name := nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $init, 1);
nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_name);

# Also guard on there being no dispatch state.
my $track_state := nqp::dispatch('boot-syscall', 'dispatcher-track-resume-state');
nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_state);

# Build up the list of methods to defer through.
my $start_type := nqp::captureposarg($init, 0);
my str $name := nqp::captureposarg_s($init, 1);
my @mro := nqp::can($start_type.HOW, 'mro_unhidden')
    ?? $start_type.HOW.mro_unhidden($start_type)
    !! $start_type.HOW.mro($start_type);
my @methods;
for @mro {
    my %mt := nqp::hllize($_.HOW.method_table($_));
    if nqp::existskey(%mt, $name) {
        @methods.push(%mt{$name});
    }
}

# If there's nothing to defer to, we'll evaluate to Nil (just don't set
# the next method, and it happens below).
if nqp::elems(@methods) >= 2 {
    # We can defer. Populate next method.
    @methods.shift; # Discard the first one, which we initially called
    $next_method := @methods.shift; # The immediate next one

# Build chain of further methods and set it as the state.
my $chain := Exhausted;
while @methods {
    $chain := DeferralChain.new(@methods.pop, $chain);
}
nqp::dispatch('boot-syscall', 'dispatcher-set-resume-state-literal', $chain);
}
}
elsif !nqp::istype($state, Exhausted) {
    # Already working through a chain of method deferrals. Obtain
    # the tracking object for the dispatch state, and guard against
    # the next code object to run.
    my $track_state := nqp::dispatch('boot-syscall', 'dispatcher-track-resume-state');
    my $track_method := nqp::dispatch('boot-syscall', 'dispatcher-track-attr',
        $track_state, DeferralChain, '$!code');
    nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_method);

# Update dispatch state to point to next method.
my $track_next := nqp::dispatch('boot-syscall', 'dispatcher-track-attr',
    $track_state, DeferralChain, '$!next');
nqp::dispatch('boot-syscall', 'dispatcher-set-resume-state', $track_next);

# Set next method, which we shall defer to.
$next_method := $state.code;
}
else {
# Dispatch already exhausted; guard on that and fall through to returning
# Nil.
my $track_state := nqp::dispatch('boot-syscall', 'dispatcher-track-resume-state');
nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_state);
}

# If we found a next method...
if nqp::isconcrete($next_method) {
    # Call with same (that is, original) arguments. Invoke with those.
    # We drop the first two arguments (which are only there for the
    # resumption), add the code object to invoke, and then leave it
    # to the invoke or multi dispatcher.
    my $just_args := nqp::dispatch('boot-syscall', 'dispatcher-drop-arg',
        nqp::dispatch('boot-syscall', 'dispatcher-drop-arg', $init, 0),
        0);
    my $delegate_capture := nqp::dispatch('boot-syscall',
        'dispatcher-insert-arg-literal-obj', $just_args, 0, $next_method);
    if nqp::istype($next_method, Routine) && $next_method.is_dispatcher {
        nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-multi',
            $delegate_capture);
    }
    else {
        nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-invoke',
            $delegate_capture);
    }
}
else {
# No method, so evaluate to Nil (boot-constant disregards all but
# the first argument).
nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-constant',
nqp::dispatch('boot-syscall', 'dispatcher-insert-arg-literal-obj',
$capture, 0, Nil));
}
});

That’s quite a bit to take in, and quite a bit of code. Remember, however, that this is only run for the record phase of a dispatch resumption. It also produces a dispatch program at the callsite of the callsame, with the usual guards and outcome. Implicit guards are created for the dispatcher that we are resuming at that point. In the most common case this will end up monomorphic or bimorphic, although situations involving nestings of multiple dispatch or method dispatch could produce a more morphic callsite.

The design I’ve picked forces resume callbacks to deal with two situations: the first resumption and the latter resumptions. This is not ideal in a couple of ways:

1. It’s a bit inconvenient for those writing dispatch resume callbacks. However, it’s not like this is a particularly common activity!
2. The difference results in two dispatch programs being stacked up at a callsite that might otherwise get just one.

Only the second of these really matters. The reason for the non-uniformity is to make sure that the overwhelming majority of calls, which never lead to a dispatch resumption, incur no per-dispatch cost for a feature that they never end up using. If the result is a little more cost for those using the feature, so be it. In fact, early benchmarking shows callsame with wrap and method calls seems to be up to 10 times faster using the new dispatcher than in current Rakudo, and that’s before the specializer understands enough about it to improve things further!

### What’s done so far

Everything I’ve discussed above is implemented, except that I may have given the impression somewhere that multiple dispatch is fully implemented using the new dispatcher, and that is not the case yet (no handling of where clauses and no dispatch resumption support).

### Next steps

Getting the missing bits of multiple dispatch fully implemented is the obvious next step.
The other missing semantic piece is support for callwith and nextwith, where we wish to change the arguments that are being used when moving to the next candidate. A few other minor bits aside, that in theory will get all of the Raku dispatch semantics at least supported.

Currently, all standard method calls ($obj.meth()) and other calls (foo() and $foo()) go via the existing dispatch mechanism, not the new dispatcher. Those will need to be migrated to use the new dispatcher also, and any bugs that are uncovered will need fixing. That will get things to the point where the new dispatcher is semantically ready.

After that comes performance work: making sure that the specializer is able to deal with dispatch program guards and outcomes. The goal, initially, is to get steady state performance of common calling forms to perform at least as well as in the current master branch of Rakudo. It’s already clear enough there will be some big wins for some things that to date have been glacial, but it should not come at the cost of regression on the most common kinds of dispatch, which have received plenty of optimization effort before now.

Furthermore, NQP – the restricted form of Raku that the Rakudo compiler and other bits of the runtime guts are written in – also needs to be migrated to use the new dispatcher. Only when that is done will it be possible to rip out the current method cache, multiple dispatch cache, and so forth from MoarVM.

An open question is how to deal with backends other than MoarVM. Ideally, the new dispatch mechanism will be ported to those. A decent amount of it should be possible to express in terms of the JVM’s invokedynamic (and this would all probably play quite well with a Truffle-based Raku implementation, although I’m not sure there is a current active effort in that area).

### Future opportunities

While my current focus is to ship a Rakudo and MoarVM release that uses the new dispatcher mechanism, that won’t be the end of the journey.
Some immediate ideas:

• Method calls on roles need to pun the role into a class, and so method lookup returns a closure that does that and replaces the invocant. That’s a lot of indirection; the new dispatcher could obtain the pun and produce a dispatch program that replaces the role type object with the punned class type object, which would make the per-call cost far lower.
• I expect both the handles (delegation) and FALLBACK (handling missing method call) mechanisms can be made to perform better using the new dispatcher.
• The current implementation of assuming – used to curry or otherwise prime arguments for a routine – is not ideal, and an implementation that takes advantage of the argument rewriting capabilities of the new dispatcher would likely perform a great deal better.

Some new language features may also be possible to provide in an efficient way with the help of the new dispatch mechanism. For example, there’s currently not a reliable way to try to invoke a piece of code, just run it if the signature binds, or to do something else if it doesn’t. Instead, things like the Cro router have to first do a trial bind of the signature, and then do the invoke, which makes routing rather more costly. There’s also the long suggested idea of providing pattern matching via signatures with the when construct (for example, when * -> ($x) {}; when * -> ($x, *@tail) { }), which is pretty much the same need, just in a less dynamic setting.

### In closing…

Working on the new dispatch mechanism has been a longer journey than I first expected. The resumption part of the design was especially challenging, and there’s still a few important details to attend to there. Something like four potential approaches were discarded along the way (although elements of all of them influenced what I’ve described in this post). Abstractions that hold up are really, really, hard.
I also ended up having to take a couple of months away from doing Raku work at all, felt a bit crushed during some others, and have been juggling this with the equally important RakuAST project (which will be simplified by being able to assume the presence of the new dispatcher, and also offers me a range of softer Raku hacking tasks, whereas the dispatcher work offers few easy pickings).

Given all that, I’m glad to finally be seeing the light at the end of the tunnel. The work that remains is enumerable, and the day we ship a Rakudo and MoarVM release using the new dispatcher feels a small number of months away (and I hope writing that is not tempting fate!)

The new dispatcher is probably the most significant change to MoarVM since I founded it, in so far as it sees us removing a bunch of things that have been there pretty much since the start. RakuAST will also deliver the greatest architectural change to the Rakudo compiler in a decade. Both are an opportunity to fold years of learning things the hard way into the runtime and compiler. I hope when I look back at it all in another decade’s time, I’ll at least feel I made more interesting mistakes this time around.

## brrt to the future: Why bother with Scripting?

### Published by Bart Wiegmans on 2021-03-14T14:33:00

Many years back, Larry Wall shared his thesis on the nature of scripting. Since recently even Java gained 'script' support I thought it would be fitting to revisit the topic, and hopefully relevant to the perl and raku language community.

The weakness of Larry's treatment (which, to be fair to the author, I think is more intended to be enlightening than to be complete) is the contrast of scripting with programming. This contrast does not permit a clear separation because scripts are programs. That is to say, no matter how long or short, scripts are written commands for a machine to execute, and I think that's a pretty decent definition of a program in general.
A more useful contrast - and, I think, the intended one - is between scripts and other sorts of programs, because that allows us to compare scripting (writing scripts) with 'programming' (writing non-script programs). And to do that we need to know what other sorts of programs there are. The short version of that answer is - systems and applications, and a bunch of other things that aren't really relevant to the working programmer, like (embedded) control algorithms, spreadsheets and database queries. (The definition I provided above is very broad, by design, because I don't want to get stuck on boundary questions). Most programmers write applications, some write systems, virtually all write scripts once in a while, though plenty of people who aren't professional programmers also write scripts.

I think the defining features of applications and systems are, respectively:

• Applications present models to users (for manipulation)
• Systems provide functionality to other programs

Consider for instance a mail client (like thunderbird) in comparison to a mailer daemon (like sendmail) - one provides an interface to read and write e-mails (the model) and the other provides functionality to send that e-mail to other servers. Note that under this (again, broad) definition, libraries are also system software, which makes sense, considering that their users are developers (just as for, say, PostgreSQL) who care about things like performance, reliability, and correctness. Incidentally, libraries as well as 'typical' system software (such as database engines and operating system kernels) tend to be written in languages like C and C++ for much the same reasons.

What then, are the differences between scripts, applications, and systems? I think the following is a good list:

• Scripts tend to be short, applications in particular can grow very large.
• Scripts tend to be ad-hoc (written for a specific need), applications and systems tend to be designed for a range of use cases.
(Very common example: build scripts)
• Scripts tend to run only in a specific environment; in contrast, many applications are designed for a range of devices/clients; many systems have specific requirements but the intention is that they can be set up in multiple distinct environments.
• Because scripts are ad-hoc, short, and environment-dependent, many of software engineering's standard best practices don't really apply (and are in fact often disregarded).

Obviously these distinctions aren't really binary - 'short' versus 'long', 'ad-hoc' versus 'general purpose' - and can't be used to conclusively settle the question whether something is a script or an application. (If, indeed, that question ever comes up). More important is that for the 10 or so scripts I've written over the past year - some professionally, some not - all or most of these properties held, and I'd be surprised if the same isn't true for most readers.

And - finally coming at the point that I'm trying to make today - these features point to a specific niche of programs more than to a specific technology (or set of technologies). To be exact, scripts are (mostly) short, custom programs to automate ad-hoc tasks, tasks that are either too specific or too small to develop and distribute another program for. This has further implications on the preferred features of a scripting language (taken to mean, a language designed to enable the development of scripts). In particular:

• It should make programs concise. The economic rationalization is that the total expected lifetime value of a script, being ad-hoc and context-dependent, is not very great, so writing it should be cheap, which implies that the script should be short.
• Related to this, the value provided by type systems is generally less than in larger (application) programs, and the value of extensive modelling features (class hierarchies) is similarly low, so many scripting languages have very weak type systems and data modelling features, if they have them at all.
• Interoperation with the environment is on the other hand very important, so I/O features tend to be well-developed. (Contrast C, in which I/O is entirely an afterthought provided by a library).
• It is acceptable to depend on a local environment in implicit ways, since that's what you are going to do anyway.
• It is acceptable to warn on a condition that might've been a fatal error in another programming language.
• In fact, I think that concerns of correctness are often different, meaning relaxed, compared to applications, again because scripters don't necessarily expect their scripts to run on every environment and with every possible input.

As an example of the last point - Python 3 requires users to be exact about the encoding of their input, causing all sorts of trouble for unsuspecting scripters when they accidentally try to read ISO-8551 data as UTF-8, or vice versa. Python 2 did not, and for most scripts - not applications - I actually think that is the right choice.

This niche doesn't always exist. In computing environments where everything of interest is adequately captured by an application, or which lacks the ability to effectively automate ad-hoc tasks (I'm thinking in particular of Windows before PowerShell), the practice of scripting tends to not develop. Similarly, in a modern 'cloud' environment, where system setup is controlled by a state machine hosted by another organization, scripting doesn't really have much of a future.
To put it another way, scripting only thrives in an environment that has a lot of 'scriptable' tasks, meaning tasks for which there isn't already a pre-made solution available, environments that have powerful facilities available for a script to access, and whose users are empowered to automate those tasks. Such qualities are common on Unix/Linux 'workstations' but rather less so on smartphones and (as noted before) cloud computing environments.

Truth be told, I'm a little worried about that development. I could point to, and expound on, the development and popularity of languages like go and rust, which aren't exactly scripting languages, or the replacement of Javascript with TypeScript, to make the point further, but I don't think that's necessary. At the same time I could point to the development of data science as a discipline to demonstrate that scripting is alive and well (and indeed perhaps more economically relevant than before).

What should be the conclusion for perl 5/7 and raku? I'm not quite sure, mostly because I'm not quite sure whether the broader perl/raku community would prefer their sister languages to be scripting or application languages. (As implied above, I think the Python community chose that they wanted Python 3 to be an application language, and this was not without consequences to their users).

Raku adds a number of features common to application languages (I'm thinking of its powerful type system in particular), continuing a trend that perl 5 arguably pioneered. This is indeed a very powerful strategy - a language can be introduced for scripts and some of those scripts are then extended into applications (or even systems), thereby ensuring its continued usage. But for it to work, a new perl family language must be introduced on its scripting merits, and there must be a plentiful supply of scriptable tasks to automate, some of which - or a combination of which - grow into an application.
For myself, I would like to see scripting have a bright future. Not just because scripting is the most accessible form of programming, but also because an environment that permits, even requires, scripting is one where not all interesting problems have been solved, one where its users ask it to do tasks so diverse that there isn't an app for that, yet. One where the true potential of the wonderful devices that surround us can be explored. In such a world there might well be a bright future for scripting.

## Andrew Shitov: Computing factorials using Raku

### Published by Andrew Shitov on 2021-01-31T18:19:33

In this post, I’d like to demonstrate a few ways of computing factorials using the Raku programming language.

## 1 — reduction

Let me start with the basic and the most effective (not necessarily the most efficient) form of computing the factorial of a given integer number:

say [*] 1..10; # 3628800

In the below examples, we will mostly be dealing with the factorial of 10, so remember the result. But to make the programs more versatile, let us read the number from the command line:

unit sub MAIN($n);

say [*] 1..$n;

To run the program, pass the number:

$ raku 00-cmd.raku 10
3628800

The program uses the reduction meta-operator [ ] with the main operator * in it.

You can also start with 2 (you can even compute 0! and 1! this way).

unit sub MAIN($n);

say [*] 2..$n;

## 2 — for

The second solution is using a postfix for loop to multiply the numbers in the range:

unit sub MAIN($n);

my $f = 1;
$f *= $_ for 2..$n;

say $f;

This solution is not as expressive, but the code is still quite clear.

## 3 — map

You can also use map that is applied to a range:

unit sub MAIN($n);

my $f = 1;
(2..$n).map: $f *= *;

say $f;

Refer to my article All the stars of Perl 6, or * ** * to learn more about how to read *= *.

## 4 — recursion

Let’s implement a recursive solution.

unit sub MAIN($n);

sub factorial($n) {
    if $n < 2 {
        return 1;
    }
    else {
        return $n * factorial($n - 1);
    }
}

say factorial($n);

There are two branches, one of which terminates recursion.

## 5 — better recursion

The previous program can be rewritten to use less punctuation:

unit sub MAIN($n);

sub factorial($n) {
    return 1 if $n < 2;
    return $n * factorial($n - 1);
}

say factorial($n);

Here, the first return is managed by a postfix if, and the second return can only be reached if the condition in if is false. So, neither an additional Boolean test nor else is needed.

## 6 — big numbers

What if you need to compute a factorial of a relatively big number? No worries, Raku will just do it:

say [*] 1..500;

The speed is more than acceptable for any practical application:

raku 06-long-factorial.raku  0.14s user 0.02s system 124% cpu 0.127 total
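As a cross-check in Python (not from the original post): the value Raku prints for `[*] 1..500` is a 1135-digit integer, and Python's `math.factorial` computes it just as natively, since both languages use arbitrary-precision integers by default.

```python
import math

# 500! is an arbitrary-precision integer; both Raku and Python handle
# it without any special big-number library.
f500 = math.factorial(500)
digits = len(str(f500))  # number of decimal digits in 500!
```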

## 7 — small numbers

Let’s try the opposite and compute a factorial that fits into a native integer:

unit sub MAIN($n);

my int $f = 1;
$f *= $_ for 2..$n;

say $f;

I am using a for loop here, but notice that the type of $f is a native integer (thus, 8 bytes). This program works with the numbers up to 20:

$ raku 07-int-factorial.raku 20
2432902008176640000

## 8 — sequence

The fun fact is that you can add a dot to the first program:

unit sub MAIN($n);

say [*] 1 ... $n;

## 9 — reversed sequence

The sequence can also go in the reverse direction:

say [*] $n ... 1;

## 10 — sequence with definition

Nothing stops us from defining the elements of the sequence with a code block. The next program shows how you do it:

unit sub MAIN($n);

my @f = 1, * * ++$ ... *;

say @f[$n];

This time, the program generates a sequence of factorials from 1! to $n!, and to print the only one we need, we take the value from the array as @f[$n]. Notice that the sequence itself is lazy and its right end is undefined, so you can’t use @f[*-1], for example.

The rule here is * * ++$ (multiply the last computed value by the incremented index); it is using the built-in state variable $.

## 11 — multi functions

The idea of solutions 4 and 5 with two branches can be transformed further into multi-functions:

unit sub MAIN($n);

multi sub factorial(1) { 1 }
multi sub factorial($n) { $n * factorial($n - 1) }

say factorial($n);

For the numbers above 1, Raku calls the second variant of the function. When the number comes down to 1, recursion stops, because the first variant is called. Notice how easily you can create a variant of a function that only reacts to the given value.

## 12 — where

The previous program loops infinitely if you try to set $n to 0. One of the simplest solutions is to add a where clause to catch that case too.

unit sub MAIN($n);

multi sub factorial($n where $n < 2) { 1 }
multi sub factorial($n) { $n * factorial($n - 1) }

say factorial($n);

## 13 — operator

Here’s another classical Raku solution: modifying its grammar to allow mathematical notation $n!.

unit sub MAIN($n);

sub postfix:<!>($n) {
    [*] 1..$n
}

say $n!;

## 14 — methodop

A rarely seen Raku feature called methodop (method operator) allows you to call a function as if it was a method:

unit sub MAIN($n);

sub factorial($n) {
    [*] 1..$n
}

say $n.&factorial;

## 15 — cached

Recursive solutions are perfect subjects for result caching. The following program demonstrates this approach.

unit sub MAIN($n);

use experimental :cached;

sub f($n) is cached {
    say "Called f($n)";
    return 1 if $n < 2;
    return $n * f($n - 1);
}

say f($n div 2);
say f($n);

This program first computes a factorial of the half of the input number, and then of the number itself. The program logs all the calls of the function. You can clearly see that, say, the factorial of 10 is using the results that were already computed for the factorial of 5:

$ raku 15-cached-factorial.raku 10
Called f(5)
Called f(4)
Called f(3)
Called f(2)
Called f(1)
120
Called f(10)
Called f(9)
Called f(8)
Called f(7)
Called f(6)
3628800

Note that the feature is experimental.
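For comparison, Python's standard library offers the same memoisation idea via `functools.lru_cache` (an analogue of `is cached`, not the Raku feature itself); the `calls` list plays the role of the `say` logging above.

```python
from functools import lru_cache

calls = []  # log of every real (non-cached) invocation, like the say above

@lru_cache(maxsize=None)
def f(n):
    calls.append(n)
    return 1 if n < 2 else n * f(n - 1)

print(f(5))   # computes f(5) down to f(1)
print(f(10))  # only f(10) down to f(6) are new; f(5) comes from the cache
```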

## 16 — triangular reduction

The reduction operator that we already used has a special variant [\ ] that allows you to keep all the intermediate results. This is somewhat similar to using a sequence in the example 10.

unit sub MAIN($n);

my @f = [\*] 1..$n;

say @f[$n - 1];

## 17 — division of factorials

Now a few programs that go beyond the factorials themselves. The first program computes the value of the expression a! / b!, where both a and b are integer numbers, and a is not less than b. The idea is to optimise the solution to skip the overlapping parts of the multiplication sequences. For example, 10! / 5! is 6 * 7 * 8 * 9 * 10.

To have more fun, let us modify Raku’s grammar so that it really parses the above expression.

unit sub MAIN($a, $b where $a >= $b);

class F {
    has $.n;
}

sub postfix:<!>(Int $n) {
    F.new(n => $n)
}

sub infix:</>(F $a, F $b) {
    [*] $b.n ^.. $a.n
}

say $a! / $b!;

We already have seen the postfix:<!> operator. To catch division, another operator is defined, but to prevent catching the division of data of other types, a proxy class F is introduced.

To keep proper processing of expressions such as 4 / 5, define another / operator that catches things which are not F. Don’t forget to add multi to both options. The callsame built-in routine dispatches control to the built-in operator definitions.

. . .

multi sub infix:</>(F $a, F $b) {
    [*] $b.n ^.. $a.n
}

multi sub infix:</>($a, $b) {
callsame
}

say $a! / $b!;
say 4 / 5;
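The range trick used in `infix:</>` above, `$b.n ^.. $a.n` (exclusive on the left), can be cross-checked in Python: multiplying only the non-overlapping part of the two factorials gives the same result as full division.

```python
import math

def fact_div(a, b):
    # a! / b! as the product (b+1) * (b+2) * ... * a, mirroring the
    # Raku expression [*] $b.n ^.. $a.n (left-exclusive range).
    assert a >= b
    return math.prod(range(b + 1, a + 1))

# 10! / 5! is 6 * 7 * 8 * 9 * 10 = 30240, as stated in the text.
assert fact_div(10, 5) == 30240
assert fact_div(10, 5) == math.factorial(10) // math.factorial(5)
```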

## 18 — optimisation

Let’s try to reduce the number of multiplications. Take a factorial of 10:

10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1

Now, take one number from each end, multiply them, and repeat the procedure:

10 * 1 = 10
9 * 2 = 18
8 * 3 = 24
7 * 4 = 28
6 * 5 = 30

You can see that every such result is bigger than the previous one by 8, 6, 4, and 2. In other words, the difference reduces by 2 on each iteration, starting from $n - 2, where $n is the input number.

The whole program that implements this algorithm is shown below:

unit sub MAIN(
    $n is copy where $n %% 2 #= Even numbers only
);

my $f = $n;

my $d = $n - 2;
my $m = $n + $d;

while $d > 0 {
    $f *= $m;
    $d -= 2;
    $m += $d;
}

say $f;

It only works for even input numbers, so it contains a restriction reflected in the where clause of the MAIN function. As homework, modify the program to accept odd numbers too.
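The pairing algorithm is easy to sanity-check against an ordinary factorial in Python (an independent sketch of the same steps, not part of the original post):

```python
import math

def paired_factorial(n):
    # Same steps as the Raku version: multiply pairs taken from both
    # ends of 1..n; works for even n only, mirroring the where clause.
    assert n % 2 == 0
    f = n
    d = n - 2
    m = n + d
    while d > 0:
        f *= m
        d -= 2
        m += d
    return f

# Cross-check against the ordinary factorial.
for n in (2, 4, 10, 20):
    assert paired_factorial(n) == math.factorial(n)
```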

## 19 — integral

Before wrapping up, let’s look at a couple of exotic methods, which, however, can be used to compute factorials of non-integer numbers (or, to be stricter, to compute what can be called an extended definition of it).

The proper way would be to use the Gamma function, but let me illustrate the method with a simpler formula:

n! = ∫₀¹ (−ln x)ⁿ dx

An integral is a sum by definition, so let’s make a straightforward loop:

unit sub MAIN($n);

my num $f = 0E0;
my num $dx = 1E-6;

loop (my $x = $dx; $x <= 1; $x += $dx) {
    $f += (-log($x)) ** $n;
}

say $f * $dx;

With the given step of 1E-6, the result is not that exact:

$ raku 19-integral-factorial.raku 10
3086830.6595557937

But you can compute a ‘factorial’ of a floating-point number. For example, 5! is 120 and 6! is 720, but what is 5.5!?

$ raku 19-integral-factorial.raku 5.5
285.948286477563

## 20 — another formula

And finally, Stirling’s formula to the rescue. The bigger the n, the more accurate the result. The implementation can be as simple as this:

unit sub MAIN($n);

# τ = 2 * π
say (τ * $n).sqrt * ($n / e) ** $n;

But you can make it a bit more outstanding if you have a fixed $n:

say sqrt(τ * 10) * (10 / e)¹⁰;
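A quick numeric check of the approximation in Python (a sketch, not from the original post): for n = 10, Stirling’s formula underestimates 10! by a little under one per cent.

```python
import math

def stirling(n):
    # sqrt(tau * n) * (n / e) ** n, with tau = 2 * pi as in the Raku version
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

approx = stirling(10)
exact = math.factorial(10)           # 3628800
rel_err = (exact - approx) / exact   # Stirling underestimates n!
```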

* * *

And that’s it for now. You can find the source code of all the programs shown here in the GitHub repository github.com/ash/factorial.

## What's fez and what's the zef eco?

Fez is the tool used for uploading your dists to the zef ecosystem.  Subquestion: why the name fez?  Surely it does the opposite of zef and should be named as such.

If you're in a git repo then fez is creating the archives using

git archive --format tar ...
gzip ...

If you're not in a git repo then it's using tar

tar -czf .

If you'd like to upload a hand rolled archive (tar.gz) then you can use

fez upload --file <path to your artisanal tar.gz>

## Why YAE (yet another ecosystem)?

we have p6c. (yup.)
we have cpan. (ok.)

what's wrong with these?

Reason 1: neither of those repositories is more than a repository.  There is no quality control or even minor checks.  If you have a PAUSE id, you can upload anything to cpan.  If you have a github login, you can submit a MR to get your module included in p6c.  The basic dist checks in raku amount to looking at versions and looking at names.  Let's look at the name problem for both ecosystems.

cpan: the naming problem is that the index is keyed on name alone.  If you and your arch rival both upload a dist with the name Basketball, differing only by auth, then only one of them shows up in the main listing.

p6c: the naming problem here is a little more abhorrent.  If you and your nemesis both upload competing Emulator::DOS dists, but in your haste you both do the minimum effort to make your module available, then it's a crap shoot as to who gets to rule the ecosystem (hint: it's neither of you, because my version is *, which is another discussion).

Reason 2: the * version wholly wrecks the current ecosystem.  Uploading a module with an asterisk version matches every module version with that name.  Interested in downloading the very nice DB::Pg?  Too bad for you unless you want to type zef install DB::Pg:auth<github:CurtTilmes>. Every. Time.  Why?  Because I just uploaded a dist supplying all of the same files with the word die; in them, but with the META version set to *.  Version.new('*') matches every version, so no matter which one you request, you can always get mine.

Reason 3: you can be reasonably sure who you're getting the actual tarball from on cpan.  You can, too, with git: it's in the url.  But you can't be reasonably sure where you got the file from post install, because anything goes in the metas for both cpan and p6c.  This isn't the strongest of reasons, but it's certainly important if you're going to allow multiple dists to share one name and you really need to figure out why use XML seems to be flapping.
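The `*` matching behaviour described in Reason 2 can be modelled in a couple of lines (an illustrative Python sketch of the Version.new('*') semantics as described above, not zef's actual code):

```python
def version_matches(requested, available):
    # '*' on either side matches anything, modelling the Version.new('*')
    # behaviour described above (illustrative sketch, not zef's code).
    return requested == "*" or available == "*" or requested == available

# Requesting any specific version still matches the malicious '*' dist:
assert version_matches("0.5.2", "*")
assert version_matches("9.9.9", "*")
```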

Are these reasons really that big that it necessitates an entirely new infrastructure and ecosystem?

Yes. When you hack together tools not built for the purpose then you'll always be hammering with the end of the screwdriver.  Your effort is spent making a system usable rather than starting from usable and making it awesome.  Submitting patches to cpan is great but it seems more than obnoxious to introduce side effects into a system that's serving perl so well.

Other reasons in no particular order:

cpan: is very tedious to use and register with.

p6c: versions are volatile if specific commits aren't pinned in the source, and getting bug fixes is irritating if the author forgets to update the meta (you can get around it with zef install --force).

## Where can I browse the zef ecosystem packages?

Currently only https://raku.land is indexing the ecosystem in the browser.  If you'd like to just peek at everything available you can see the JSON index at https://360.zef.pm.

When can you see them on modules.raku.org? Stay tuned.

## So what does zef eco do, then?

The zef ecosystem has quality control around the version and auth of the dists being uploaded.

Does this dist already exist?  If yes, it won't be indexed.

An absent or * version is uploaded? Won't be indexed.

Does the META auth match the uploader?  If not, then it won't be indexed.
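The three gatekeeping rules above can be sketched as a small function (a hypothetical sketch for illustration only; all the names here are invented, not the actual indexer code):

```raku
# Hypothetical sketch of the indexing rules described above.
sub should-index(%meta, Str $uploader, @already-indexed) {
    # Rule 1: the exact name/version/auth triple must not already exist.
    return False if @already-indexed.first({
        .<name> eq %meta<name> and .<version> eq %meta<version> and .<auth> eq %meta<auth>
    });
    # Rule 2: an absent or '*' version is rejected.
    return False unless %meta<version> and %meta<version> ne '*';
    # Rule 3: the META auth must match the uploader.
    return False unless %meta<auth> eq "zef:$uploader";
    True;
}

say should-index( { :name<Foo>, :version<*>, :auth<zef:me> }, 'me', [] );  # False
```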

## Where's the server code, dude?

Short answer: there isn't a public repo to point you at.  Longer answer: the ecosystem is running on AWS Lambda, S3, and CloudFront.  All of the login/upload logic exists inside of a Lambda function.  There are backups, but the code itself isn't organized in any way that it can be exposed.

## Indexing:

### What happens if my module index fails?

You'll get an email.

### How frequently does zef eco index?

As soon as you upload.  The TTL on caching (at this very moment) is set to two minutes, so you'll have to wait at least that long.

## Death by Perl6: fez|zef - a raku ecosystem and auth

fez is a utility for interacting with the zef ecosystem.  you can think of it as the opposite of zef: zef downloads distributions and installs them, and fez uploads them, making them available to zef.

## how does this work?

there exist a myriad of ways to make an ecosystem for raku.  two current implementations are p6c (which is a text file of git repos) and cpan (which works much the same way as it does for perl, keeping an index based on <name>).  both of these implementations present their own challenges when working with raku distributions; the challenges are too off topic to really get into in this article and are left for a follow up article.

so! zef's ecosystem is built on top of s3, lambda, and cloudfront.  for all intents and purposes it's a file system exposed over http.  there is a json index which is a master list of all available distributions, enriched with the path those distributions can be had from (spoiler alert: it's a hash of the tar.gz file that gets uploaded and placed in a specific directory).  you can find the master list at https://360.zef.pm/index.json.  subsequently, you can find information about specific names by using the name.  eg, a distribution named fez will have meta info available at https://360.zef.pm/F/EZ/FEZ/index.json (names with :: are converted :: => _).
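The path scheme can be sketched as a small helper (a guess inferred purely from the single fez example above — first letter, next two letters, then the full uppercased name — so treat it as an assumption, not the indexer's actual rule):

```raku
# Hypothetical helper: derive the 360.zef.pm meta path for a dist name,
# inferred from the fez => /F/EZ/FEZ/index.json example in the text.
sub meta-url(Str $name) {
    my $n = $name.subst('::', '_', :g).uc;   # Foo::Bar => FOO_BAR
    "https://360.zef.pm/{ $n.substr(0, 1) }/{ $n.substr(1, 2) }/$n/index.json"
}

say meta-url('fez');   # https://360.zef.pm/F/EZ/FEZ/index.json
```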

zef's ecosystem also ensures that the auth matches.  one limitation of cpan is that you can upload under your PAUSE ID but the distribution you upload can contain any information in the meta (or no meta), including a meaningless auth value.  zef's ecosystem only indexes distributions whose meta<auth> matches that of the uploader.  you can be reasonably assured that when you request HTTP::Tiny:auth<zef:jjatria> using the zef ecosystem you're not getting something by the same dist name from someone who forked and forgot to update the meta information, or any other shenanigans.

zef's ecosystem rejects any dist with a version of *.  why is this?  because * supersedes every other possible version.  if zef allowed this then you could own the entire ecosystem by publishing a module with version * for every possible name, forcing consumers to specify a version.

zef also does some basic sanity checks on the meta data.  if the uploaded module has a bad meta file then it's rejected by the fez indexer.  no more finding modules available that won't install because the meta data is bad.

## cool, what do i need to get started?

all you need is zef: check it out!

zef install fez
...
fez register
>>= Email: [email protected]
>>= Registration successful, requesting auth key
>>= What would you like your display name to show? tony o
>>= Public email address? [email protected]
=<< Your meta info has been updated


now you're off and running.  once you have your module sitting pretty and ready for edgar to download it's only the simple command of:

my_module_dir$ fez upload

## what version of zef do i need?

zef > 0.10.0 should work with zef's ecosystem.

## why now?

ugexe and i spent a ton of time deliberating how this ecosystem should work back in 2013 when we started the zef project.  if you're a sleuth you can dig through zef's commits; an ecosystem was up and alpha test worthy back then, but the time wasn't right (beware of the easter dragons).  nine was working on precomp, zef was in its infancy, panda had its own (inflexible) way of being modularized, the state of the S# for distributions was in constant flux; there were far too many factors and possibilities to make pursuing the ecosystem sane.  if an ecosystem had been designed then, it certainly wouldn't have lived up to the expectations, and it would've either painted us into a corner regarding how dists work or choked on its own obsolescence and cruft.

## i want to contribute but don't know how

if you want to support the effort, PRs and feature ideas are always welcome.  you can find me as tony-o in #raku on freenode or, if supporting the cost of running the ecosystem is more your speed, then you can donate here.

## Andrew Shitov: The course of Raku

### Published by Andrew Shitov on 2021-01-13T08:44:00

I am happy to report that the first part of the Raku course is completed and published. The course is available at course.raku.org.

The grant was approved a year and a half ago, right before the PerlCon conference in Rīga. I was the organiser of the event, so I had to postpone the course due to high load. During the conference, it was proposed to rename Perl 6, which, together with other stuff, made me wonder whether the course was needed. After months, the name was settled, the distinction between Perl and Raku became clearer, and, more importantly, external resources and services, e.g., Rosettacode and glot.io, started using the new name. So, now I think it is still a good idea to create the course that I dreamed about a couple of years ago.
I started the main work in the middle of November 2020, and by the beginning of January 2021, I had the first part ready. The current plan includes five parts:

1. Raku essentials
2. Advanced Raku subjects
3. Object-oriented programming in Raku
4. Regexes and grammars
5. Functional, concurrent, and reactive programming

It differs a bit from the original plan published in the grant proposal. While the material stays the same, I decided to split it differently. Initially, I was going to go through all the topics one after another. Now, the first sections reveal the basics of some topics, and we will return to the same topics on the next level in the second part.

For example, in the first part, I only talk about the basic data types: Int, Rat, Num, Str, Range, Array, List, and Hash, and their basic usage. The rest, including other types (e.g., Date or DateTime) and methods such as @array.rotate or %hash.kv, is delayed until the second part. Conversely, functions were initially a subject of the second part, but they are now discussed in the first part. So, we now have Part 1 “Raku essentials” and Part 2 “Advanced Raku topics”. This shuffling allowed me to create a linear flow such that the reader can start writing real programs after finishing just the first part of the course.

I must say that it is quite a tricky task to organise the material without backward links. In the ideal course, any topic may only be based on previously explained information. A couple of the most challenging cases were ranges and typed variables. They both cause a few chicken-and-egg loops.

During the work on the first part, I also prepared a ‘framework’ that generates the navigation through the site and helps with quiz automation. It is hosted as GitHub Pages and uses Jekyll and Liquid for generating static pages, and a couple of Raku programs to automate the process of adding new exercises and highlighting code snippets. Syntax highlighting is done with Pygments.
Returning to the course itself, it includes pages of a few different types:

• The theory that covers the current topic
• Interactive quizzes that accompany the theory of the topic and/or the section
• Exercises for the material of the whole section
• Answers to the exercises

The quizzes were not part of the grant proposal, but I think they make for a better user experience. All the quizzes have answers and comments. All the exercises are solved and published with comments to explain the solution, or even to highlight some theoretical aspects.

The first part covers 91 topics and includes 73 quizzes and 65 exercises (with 70 solutions :-). There are about 330 pages in total. The sources are kept in a GitHub repository, github.com/ash/raku-course, so people can send pull requests, etc.

At this point, the first part is fully ready. I may slightly update it if the following parts require additional information about the topics covered in Part 1. This text is a grant report, and it is also (a bit modified) published at https://news.perlfoundation.org/post/rakucourse1 on 13 January 2021.

## Jo Christian Oterhals: What did we learn from an astronomer’s hacker hunt in the 80's? Apparently, not too much

### Published by Jo Christian Oterhals on 2020-12-29T19:55:31

Computer security has seen its share of mind-boggling news lately. None more mind-boggling than the news about how alleged Russian hackers installed a backdoor into the IT monitoring product SolarWinds Orion. Through this they gained entrance into the computer systems of several US agencies and departments — ironically even into the systems of a cyber security company (Fireeye) and Microsoft itself. The news made me think of my own history with computer security, and down memory lane I went.

One particular day in late July or early August 1989 my parents, sister and me were driving home from a short summer vacation. At a short stop in a largish city, I had found a newsstand carrying foreign magazines.
There I’d bought a copy of PC/Computing’s September issue (to this day I don’t understand why American magazines are on sale a couple of months before the cover date) so that I had something to make time in the backseat pass faster.

Among articles about the relatively new MS-DOS replacement OS/2 (an OS co-developed with IBM that Microsoft would come to orphan the minute they launched Windows 3.0 and understood the magnitude of the success they had on their hands) and networking solutions from Novell (which Windows would kill as well, albeit more indirectly), the magazine brought an excerpt of the book “The Cuckoo’s Egg” by a guy named Clifford Stoll.

Although I had bought the magazine for technical information such as the stuff mentioned above, this excerpt stood out. It was a mesmerising story about how an astronomer-turned-IT-guy stumbled over a hacker, and how he, aided by interest but virtually no support from the FBI, CIA and NSA, almost single-handedly traced the hacker’s origins back to a sinister government-sponsored organisation in the then communist East Germany. This is the exact moment I discovered that my passion — computers — could form the basis of a great story.

Coastal Norway where I grew up is probably as far from the author’s native San Francisco as anything; at least our book stores were. So it wasn’t until the advent of Amazon.com some years later that I was able to order a copy of the book. Luckily, the years passed had not diminished the story. Granted, the Internet described by the author Clifford Stoll was a little more clunky than the slightly more modern dial-up internet I ordered the book on. But subtract the World Wide Web from the equation and the difference between his late-eighties internet and my mid-nineties modem version wasn’t all that big. My Internet was as much a monochrome world of telnet and text-only servers as it was a colourful web.
Email, for instance, was something I managed by telnetting into an HP Unix server and using the command line utility pine to read and send messages.

What struck me with the story was that the hacker’s success very often was enabled by sloppy system administration; one could arguably say that naïve or ignorant assumptions by sysadmins all across the US made the hack possible.

Why sloppy administration and ignorant assumptions? Well, some of the reason was that the Internet was largely run by academia back then. Academia was (and is) a culture of open research and sharing of ideas and information. As such it’s not strange that sysadmins of that time assumed that users of the computer systems had good intentions too. But no one had considered that the combination of several (open) sources of information and documents stored on these servers could end in very comprehensive insight into, say, the Space Shuttle program or military nuclear research.

Actually, the main downside to unauthorised usage had to do with cost: processing power was expensive at the time and far from a commodity. Users were billed for their computer usage. So billing was actually the reason why Stoll started his hacker hunt. There was a few cents’ worth of computer time that couldn’t be accounted for. Finding out whether this was caused by a bug or something else was the primary goal of Mr. Stoll’s hunt. What the hacker had spent this time on was — at first — a secondary issue at best.

With that in mind it’s maybe not so strange that one of the most common errors made was not changing default passwords on multi-user computers connected to the internet. One of the systems having a default password was the now virtually extinct VAX/VMS operating system for Digital’s microcomputer series VAX. This was one of the things Mr.
Stoll found out by logging each and every interaction the hacker, using Stoll’s compromised system as a gateway, had with other systems (the description of how he logged all this by wiring printers up to physical ports on the microcomputer, rewiring the whole thing every time the hacker logged on through another port, is by itself worth reading the book for). Using the backdoor, the hacker did not only gain access to that computer — they got root privileges as well.

In the 30+ years passed since I read the book I’ve convinced myself of two things: 1) we’ve learned not to use default passwords anymore, and 2) that VMS systems exhibiting this kind of backdoor are long gone.

Well, I believed these things until a few weeks ago. That’s when I stumbled onto a reddit post — now deleted, but there is still a cached version available on the Wayback Machine. Here the redditor explained how he’d identified 33 remaining VAX/VMS systems still on the Internet:

About a year ago I read the book “A Cuckoo’s Egg”, written in 1989. It included quite a bit of information pertaining to older systems such as the VAX/VMS. I went to Censys (when their big data was still accessible easily and for free) and downloaded a set of the telnet (port 23) data. A quick grep later and I had isolated all the VAX/VMS targets on the net. Low and behold, of the 33 targets (I know, really scraping the bottom of the barrel here) more than half of them were still susceptible to default password attacks literally listed in a book from almost 3 decades ago. After creating super user accounts I contacted these places to let them know that that they were sporting internet facing machines using default logins from 1989. All but one locked me out. This is 31 years later… The future will be a mess, kids

I applaud the redditor that discovered this. Because isn’t what he found a testament to something breathtakingly incompetent and impressive at the same time?
Impressive in the sense that someone’s been able to keep these ancient systems alive on the internet for decades; incompetent because the sysadmins have ignored patching the most well documented security flaw of those systems for well over a quarter of a century?

So maybe this starts to answer the question posed in the title: Did we learn anything from this? Yes, of course we did. If we look past the VMS enthusiasts out there, computer security is very different now than back then. Unencrypted communication is hardly used anywhere anymore. Security is provided by multilayered hardware and software solutions. In addition, not only are password policies widely enforced on users, but two-factor and other extra layers of authentication are used as well.

But the answer is also No. While organisations such as my workplace — which is not in the business of having secrets — have implemented lots of the newest security measures, this autumn we learned that the Norwegian parliament — which is in the business of having secrets — hasn’t. They had weak password policies and no two-factor authentication for their email system. Consequently, they recently became an easy target for Russian hackers.

I obviously don’t know the reasoning behind having weak security. But my guess is that the IT department assessed the digital competence of the parliament members and concluded that it was too low for them to handle strong passwords and manage two-factor authentication.

And this is perhaps the point where the security of yesteryear and security today differ the most: As we’re closing in on 2021, weak security is a conscious choice; but it is the same as leaving the door wide open, and any good sysadmin knows it. The ignorance exhibited in the case of the Norwegian parliament borders, in my opinion, on criminal negligence — although I guess no one will ever have to face the consequences.
What it does prove, however, is that while systems can be as good as anything, people are still the weakest link in any such system. In sum, I think my answer to the initial question is an uneasy Maybe. We still have some way to go before what Cliff Stoll taught us 32 years ago has become second nature.

## Raku Advent Calendar: Day 25: Reminiscence, refinement, revolution

### Published by jnthnwrthngtn on 2020-12-25T01:01:00

By Jonathan Worthington

## Raku release reminiscence

Christmas day, 2015. I woke up in the south of Ukraine – in the very same apartment where I’d lived for a month back in the spring, hacking on the NFG representation of Unicode. NFG was just one of the missing pieces that had to fall into place during 2015 in order for that Christmas – finally – to bring the first official release of the language we now know as Raku.

I sipped a coffee and looked out onto a snowy courtyard. That, at least, was reassuring. Snow around Christmas was relatively common in my childhood. It ceased to be the year after I bought a sledge.

I opened my laptop, and took a look at the #perl6-dev IRC channel. Release time would be soon – and I would largely be a spectator.

My contributions to the Rakudo compiler had started eight years prior. I had no idea what I was getting myself into, although if I had known, I’m pretty sure I’d still have done it. The technical challenges were, of course, fascinating for somebody who had developed a keen interest in languages, compilers, and runtimes while at university. Larry designs languages with an eye on what’s possible, not on what’s easy, for the implementer. I learned, and continue to learn, a tremendous amount by virtue of working on the Raku implementation.
Aside from that, the regular speaking at workshops and conferences opened the door to spending some years as a teacher of software development and architecture, and after that left me with a wealth of knowledge to draw on as I moved on to focus on consultancy on developer tooling. Most precious of all, however, are the friendships forged with some of those sharing the Raku journey – which I can only imagine lasting a lifetime.

Eight years had gone by surprisingly quickly. When one is involved with the day-to-day development, the progress is palpable. This feature got done, this bug got fixed, this design decision got made, this library got written, this design decision got re-made for the third time based on seeing ways early adopters stubbed their toe, this feature got re-implemented for the fourth time because of the design change… From the outside, it all looks rather different; it’s simply taking forever, things keep getting re-done, and the developers of it “have failed us all” (all seasons have their grinches).

Similarly, while from the outside the Christmas release was “the big moment”, from the inside, it was almost routine. We shipped a Rakudo compiler release, just as we’d been doing every month for years on end. Only this time, we also declared the specification test suite – which constitutes the official specification of the language – as being an official language release. The next month would look much like the previous ones: more bug reports arrive, more things get fixed and improved, more new users show up asking for guidance. The Christmas release was the end of the beginning. But the beginning is only the start of a story.

## Regular Raku refinement

It’s been five years since That Christmas. Time continues to fly by. Each week brings its advances – almost always documented in what’s currently known as the Rakudo Weekly, the successor of the Perl 6 Weekly.
Some things seem constant: there are always some new bug reports, there will always be something that’s under-documented, no matter what you make fast there will always be something else that is still slow, no matter how unlikely it seemed that someone would depend on an implementation detail they will have anyway, new Unicode versions require at least some effort, and the latest release of MacOS seemingly always requires some kind of tweak. Welcome to the life of those working on a language implementation that’s getting some amount of real-world use.

Among the seemingly unending set of things to improve, it’s easy to lose sight of just how far the Raku Programming Language, its implementation in Rakudo, and the surrounding ecosystem have come over the last five years. Here I’ll touch on just a few areas worthy of mention.

### Maturity

Maturity of a language implementation, its tools, and its libraries is really, really, hard won. There are not all that many shortcuts. Experience helps, and whenever a problem can be avoided in the first place, by having somebody about with the background to know to do so, that’s great. Otherwise, it’s largely a case of making sure that when there are problems, they get fixed and something is done to try and avoid a recurrence. It’s OK to make mistakes, but making the same mistake twice is a bit careless.

The most obvious example of this is making sure that all implementation bugs have test coverage before the issue is considered resolved. However, being proactive matters too. Prior to every Rakudo compiler release, the tests of all modules in the ecosystem are run against it. Regressions are noted, and by now can even be automatically bisected to the very commit that caused the regression. Given releases take place every month, and this process can be repeated multiple times a month, there’s a good chance the developer whose work caused the regression will have it relatively fresh in their mind.
A huge number of things have been fixed and made more robust over the last five years, and there are tasks I can comfortably reach for Raku for today that I wouldn’t have five years back. Just as important, the tooling supporting the development and release process has improved too, and continues to do so.

### Modules

There are around 3.5x as many modules available as there were five years ago, which means there’s a much greater chance of finding a module to do what you need. The improvement in quantity is easy enough to quantify (duh!), but the increase in quality has also had a significant impact, given that many problems we need to solve draw on a relatively small set of data formats, protocols, and so forth. Just to pick a few examples, 5 years ago we didn’t have:

• Cro, which is now the most popular choice for building web applications and services in Raku. Cro wasn’t a port of any existing library, but rather was designed from scratch to make the most of the Raku language.
• DB::Pg and friends: while DBIish, which existed 5 years ago, has become far more mature, DB::Pg provides a well-engineered alternative that – at least to me – has an API that feels more natural in Raku.
• Red – an ORM for Raku that puts the meta-programming powers of the language to good use. It’s marked as a work in progress, but looks promising.
• LibXML – an extensive Raku binding to the libxml native library.
• IO::Socket::Async::SSL – yes, at the time of the Christmas release, for all of the nice async things we had, there wasn’t yet an asynchronous binding of the OpenSSL library.

Given how regularly I’ve used many of these in my recent work using Raku, it’s almost hard to imagine that five years ago, none of them existed!

### An IDE

I was somewhat reluctant to put this in as a headline item, given that I’m heavily involved with it, but the Comma IDE for Raku has been a game changer for my own development work using the language.
Granted, IDEs aren’t for everyone or everything; if I’m at the command line and want to write a short script, I’ll open Vim, not fire up Comma. But for much beyond that, I’m glad of the live analysis of my code to catch errors, quick navigation, auto-complete, effortless renaming of many program elements, and the integrated test runner. The timeline view can offer insight into asynchronous programs, especially Cro services, while the grammar live view comes in handy when writing parsers. While installing an IDE just for a REPL is a bit over the top, Comma also provides a REPL with syntax highlighting and auto-completion.

Five years ago, the answer to “is there an IDE for Raku” was “uhm, well…”. Now, it’s an emphatic yes, and for some folks considering the language, that’s important.

### Performance

The Christmas release of Raku was not speedy. While just about everyone would agree there’s more work needed in this area, the situation today is a vast improvement on where we were five years ago. This applies at all levels of the stack: MoarVM’s runtime optimizer and JIT have learned a whole bunch of new tricks, calls into native libraries have become far more efficient (to the benefit of all bindings using this), the CORE setting (Raku’s standard library) has seen an incredible number of optimizations, and some key modules outside of the core have received performance analysis and optimization too. Thanks to all of that, Raku can be considered “fast enough” for a much wider range of tasks than it could five years ago.

### And yes, the name

Five years ago, Raku wasn’t called Raku. It was called Perl 6. A name change came up now and then, but remained a relatively fringe position. By 2019, it had won sufficient support to take place. A year and a bit down the line, I think we can say the rename wasn’t a panacea, but nor could it be considered a bad thing for Raku.
While I’ve personally got a lot of positive associations with the name “Perl”, it does carry an amount of historical baggage. One of the more depressing moments for me was when we announced Comma, and then saw many of the comments from outside of the community consist of the same tired old Perl “jokes”. At least in that sense, a fresh brand is a relief. Time will tell what values people will come to attach to it.

## Rational Raku revolution

With software, it feels like the ideal time to start working on something is after having already done most of the work on it. At that point, the required knowledge and insight is conveniently to hand, and at least a good number of lessons wouldn’t need to be learned the hard way again. Alas, we don’t have a time machine. But we do have an architecture that gives us a chance of being able to significantly overhaul one part of the stack at a time, so we can use what we’ve learned. A number of such efforts are now underway, and stand to have a significant impact on Raku’s next five years.

The most user-facing one is known as “RakuAST”. It involves a rewrite of the Rakudo compiler frontend that centers around a user-facing AST – that is, a document object model of the program. This opens the door to features like macros and custom compiler passes. These may not sound immediately useful to the everyday Raku user, but will enable quite a few modules to do what they’re already doing in a better way, as well as opening up some new API design possibilities.

Aside from providing a foundation for new features, the compiler frontend rewrite that RakuAST entails is an opportunity to eliminate a number of long-standing fragilities in the current compiler implementation. This quite significant overhaul is made achievable by the things that need not change: the standard library, the object system implementation, the compiler backend, and the virtual machine.
A second ongoing effort is to improve the handling of dispatch in the virtual machine, by introducing a single more general mechanism that will replace a range of feature-specific optimizations today. It should also allow better optimization of some things we presently struggle with. For example, using deferral via callsame, or where clauses in multiple dispatch, comes with a high price today (made to stand out because many other constructs in that space have become much faster in recent years). The goal is to do more with less – or at least, with less low-level machinery in the VM.

It’s not just the compiler and runtime that matter, though. The recently elected Raku Steering Council stands to provide better governance and leadership than has been achieved in the last few years. Meanwhile, efforts are underway to improve the module ecosystem and documentation.

Today, we almost take for granted much of the progress of the last five years. It’s exciting to think what will become the Raku norm in the next five. I look forward to creating some small part of that future, and especially to seeing what others – perhaps you, dear reader – will create too.

## Raku Advent Calendar: Day 24: Christmas-oriented programming, part deux

### Published by jjmerelo on 2020-12-24T01:01:00

In the previous installment of this series of articles, we started with a straightforward script, and we wanted to arrive at a sound object-oriented design using Raku. Our (re)starting point was this user story:

[US1] As a NPCC dean, given I have a list of classrooms (and their capacity) and a list of courses (and their enrollment), I want to assign classrooms to courses in the best way possible.

And we arrived at this script:

    my $courses = Course-List.new( "docs/courses.csv");
    my $classes = Classroom-List.new( "docs/classes.csv");
    say ( $classes.list Z $courses.list )
        .map( { $_.map( { .name } ).join( "\t→\t" ) } )
        .join( "\n" );

That does not really cut it, though. Every user story must be solved with a set of tests. But, well, the user story was kinda vague to start with: “in the best way possible” could be anything. So it could be argued that the way we have done it is, indeed, the best way, but we can’t really say without the tests. So let’s reformulate the US a bit:

[US1] As a NPCC dean, given I have a list of classrooms (and their capacity) and a list of courses (and their enrollment), I want to assign classrooms to courses so that no course is left without a classroom, and all courses fit in a classroom.

This is something we can hold on to. But of course, scripts can’t be tested (well, they can, but that’s another story). So let’s give this script a bit of class.
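Before diving into classes: the reformulated story now has mechanically checkable acceptance criteria (no course left out, every course fits). As a rough sketch of that logic outside Raku, here is a Python rendering of the sort-and-zip assignment plus the two checks; the function names and sample data are invented for illustration and are not part of Santa’s code:

```python
def assign(classrooms, courses):
    """Pair the biggest classroom with the biggest course, and so on down."""
    rooms = sorted(classrooms, key=lambda rc: -rc[1])
    reqs  = sorted(courses,    key=lambda cc: -cc[1])
    return list(zip(rooms, reqs))   # zip truncates to the shorter list

def acceptable(classrooms, courses):
    """Check the two acceptance criteria from the user story."""
    pairs = assign(classrooms, courses)
    no_course_left_out = len(pairs) == len(courses)
    every_course_fits  = all(cap >= enrolled
                             for (_, cap), (_, enrolled) in pairs)
    return no_course_left_out and every_course_fits

rooms   = [("Kringle", 150), ("Santa", 120), ("Kris", 33)]
courses = [("Woodworking 101", 130), ("Toymaking 101", 120)]
print(acceptable(rooms, courses))  # True: both courses fit the two biggest rooms
```

These two booleans are exactly what the tests for the class we are about to write should assert.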

## Ducking it out with lists

Actually, there’s something that does not really cut it in the script above. In the original script, you took a couple of lists and zipped them together. Here you need to call the .list method to achieve the same. But the object is still the same, right? Shouldn’t it be possible, and easy, to just zip together the two objects? Besides, the current code requires the client of the class to know the actual implementation, and an object should hide its internals as much as possible. Let’s make that an issue to solve:

As a programmer, I want the object holding the courses and classrooms to behave as would a list in a “zipping” context.

Santa rubbed his beard, thinking about how to pull this off. Course-List objects are, well, precisely that kind of object. They include a list but, how can they behave as a list? And what, precisely, is a list “in a zipping context”?

Long story short, he figured out that a “zipping context” actually iterates over every member of the two lists, in turn, putting them together. So we need to make the objects Iterable. Fortunately, that’s something you can definitely do in Raku. By mixing roles, you can make objects behave in some other way, as long as you’ve got the machinery to do so.

unit role Cap-List[::T] does Iterable;

has T @!list;

submethod new( $file where .IO.e ) {
    $file.IO.lines
    ==> map( *.split( /","\s+/ ) )
    ==> map( { T.new( @_[0], +@_[1] ) } )
    ==> sort( { -$_.capacity } )
    ==> my @list;
    self.bless( :@list );
}

submethod BUILD( :@!list ) {}

method list() { @!list }

method iterator() { @!list.iterator }

With respect to the original version, we’ve just mixed in the Iterable role and implemented an iterator method that returns the iterator on the @!list attribute. That’s not the only thing we need for it to be in “a zipping context”, however. Which begs a small digression on Raku containers and binding.

### Containers and containees

El cielo está entablicuadrillado, ¿quién lo desentablicuadrillará? El que lo entablicuadrille, buen entablicuadrillador será.

— Spanish tongue twister, loosely translated as “The sky is tablesquarebricked; who will de-tablesquarebrick it? The tablesquarebricklayer who tablesquarebricks it a good tablesquarebricklayer will be.”

It’s worth the while to check out this old Advent article, by Zoffix Znet, on what’s binding and what’s assignment in the Raku world. Binding is essentially calling an object by another name. If you bind an object to a variable, that variable will behave exactly the same as the object. And the other way round.

my $courses := Course-List.new( "docs/courses.csv");

We are simply calling the right hand side of this binding by another name, which is shorter and more convenient. We can call any method, and also we can put this “in a zipping context” by calling for on it:

.name.say for $courses;

Will return

Woodworking 101
Toymaking 101
ToyOps 310
Wrapping 210
Ha-ha-haing 401
Reindeer speed driving 130

As you can see, the “zipping context” is exactly the same as the (not-yet-documented) iterable context, which is also invoked (or coerces objects into, whatever you prefer) when used with for. for $courses will actually call $courses.iterator, returning the iterator of the list it contains. This is not actually a digression, this is totally on topic. I will have to digress, however, to explain what would have happened if we had used normal assignment, as in

my $boxed-courses = Course-List.new( "docs/courses.csv");

Assignment is a nice and peculiar thing in Raku. As the above-mentioned article says, it boxes an object into a container. You can’t easily box just any kind of thing into a Scalar container, so, Procrustes-style, it needs to fit it into the container in a certain way. But any way you think about it, the fact is that, unlike before, $boxed-courses is not a Course-List object; it’s a Scalar object that has scalarized, or itemized, a Course-List object. What would you need to de-scalarize it? Simply calling the de-cont operator on it, $boxed-courses<>, which unwraps the container and gives you what’s inside.
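Before moving on: the trick of “make the object iterable so that zipping just works” is not Raku-specific. As a rough cross-language analogue (a hypothetical class, with Python’s iteration protocol standing in for Raku’s Iterable role and iterator method):

```python
# Hypothetical Python analogue of a Course-List that mixes in Iterable:
# implementing the iteration protocol makes the wrapper usable anywhere
# a plain list would be, zip() included.
class CourseList:
    def __init__(self, courses):
        self._courses = list(courses)   # private-ish, like @!list

    def __iter__(self):                 # the counterpart of .iterator
        return iter(self._courses)

classes = ["Kringle", "Santa"]
courses = CourseList(["Woodworking 101", "Toymaking 101"])
# zip() only needs the iteration protocol, not the concrete list:
print(list(zip(classes, courses)))
# [('Kringle', 'Woodworking 101'), ('Santa', 'Toymaking 101')]
```

The client never touches the inner list, which is the same encapsulation win the Raku role buys us.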

## Scheduler classes

OK, back to our regular schedule…r.

Again, let’s not just do things as we see fit. We need to create an issue to fix:

• As a programmer, I need a class that creates schedules given a couple of files with courses and classes.

Santa is happy to provide such a thing:

use Course-List;
use Classroom-List;

unit class Schedule;

has @!schedule;

submethod new( $courses-file where .IO.e, $classes-file where .IO.e ) {

my $courses := Course-List.new($courses-file);
my $classes := Classroom-List.new($classes-file);
my @schedule = ($classes Z $courses).map({ $_.map({ .name }) });
self.bless(:@schedule);
}

submethod BUILD( :@!schedule ) {}

method schedule() { @!schedule }

method gist {
    @!schedule.map( { .join( "\t⇒\t" ) } ).join("\t");
}

Not only does it schedule courses, you can simply use it by saying it. It’s also tested, so you know that it’s going to work no matter what. With that, we can close the user story. But, can we?

## Wrapping up with a script

Santa was really satisfied with this new application. He only needed to write this small main script:

use Schedule;

sub MAIN($courses-file where .IO.e = "docs/courses.csv",
        $classes-file where .IO.e = "docs/classes.csv") {
    say Schedule.new($courses-file, $classes-file)
}

Which was straight and to the point: here are the files, here’s the schedule. But, besides, it was tested, prepared for the unexpected, and could actually be expanded to take into account unexpected events. What happens if you can’t fit elves into a particular class? What if you need to take into account other constraints, like not filling biggest first, but filling smuggest first? You can just change the algorithm, without even changing this main script. Which you don’t really need:

raku -Ilib -MSchedule -e "say Schedule.new( | @*ARGS )" docs/courses.csv docs/classes.csv

Using the command line switches for the library search path (-I) and for loading a module (-M), you can just write a statement that takes the arguments and flattens them to fit the method’s signature.

Doing this, Santa sat down in his favorite armchair to enjoy a cup of cask-aged eggnog and watch every Santa-themed movie that was being streamed until the next NPCC semester started.

## Raku Advent Calendar: Day 23: Christmas-oriented design and implementation

### Published by jjmerelo on 2020-12-22T22:44:12

Every year by the beginning of the school year, which starts by January 8th in the North Pole, after every version of the Christmas gift-giving spirit has made their rounds, Santa needs to sit down to schedule the classes of the North Pole Community College. These elves need continuous education, and they need to really learn about those newfangled toys, apart from the tools and skills of the trade. Plus it’s a good thing to have those elves occupied during the whole year in something practical and useful, so that they don’t start to invent practical jokes and play them on each other. Since there are over one million elves, the NPCC is huge. But there’s also a huge problem assigning courses to classrooms.
Once registration for classes is open, they talk to each other about what’s the ultimate blow-off class, and which one gives you extra credit for winning snowball fights. So you can’t just top up enrollment: every year, you need to check the available classrooms, and then match each one to the course that will be the most adequate for it. Here are the available classrooms:

Kris, 33
Kringle, 150
Santa, 120
Claus, 120
Niklaas, 110
Reyes Magos, 60
Olentzero, 50
Papa Noël, 30

They’re named after gift-giving spirits from all over the world, with the biggest class obviously named Kringle. In any given year, this could be the enrollment after the registration period is over:

Woodworking 101, 130
Toymaking 101, 120
Wrapping 210, 40
Reindeer speed driving 130, 30
ToyOps 310, 45
Ha-ha-haing 401, 33

They love Woodworking 101, because it’s introductory, and they get to keep whatever assignment they do during the year. Plus you get all the wood parings for burning in your stove, something immensely useful in a place that’s cold all year long. So Santa created this script to take care of it, using a bit of point-free programming and, Perl being Perl, the whippipitude and dwimmability of the two sister languages, Perl and Raku.

sub read-and-sort( $file where .IO.e ) {
    $file.IO.lines
    ==> map( *.split( /","\s+/ ) )
    ==> sort( { -$_[1].Int } )
    ==> map( { Pair.new( |@_ ) } )
}

    .map( {  $_.map( { .key } ).join( "\t→\t") } )
    .join( "\n" )

The subroutine reads the file given its name, checking first that it exists, splits every line by the comma, sorts the lines in decreasing order of the number, and then creates a Pair out of each one. The other command uses the Z operator to zip the two lists together, in decreasing order of elves, and produce a list just like this one:

Kringle → Woodworking 101
Santa → Toymaking 101
Claus → ToyOps 310
Niklaas → Wrapping 210
Reyes Magos → Ha-ha-haing 401
Olentzero → Reindeer speed driving 130

So the Kringle lecture hall gets woodworking, and it goes down from there. The Kris and Papa Noël classrooms get nothing, having been eliminated, but are kept there to be used for extra-curricular activities such as carol singing and origami. All this works as long as it does: as long as you remember where the files are, what the script did, nothing changes name or capacity, and the files are not lost. But those are a lot of ifs, and Santa is not getting any younger. As a matter of fact, he is not getting any older either. So Santa and his ToyOps team will need a more systematic approach to this scheduling, by creating an object-oriented application from requirements. After learning all about TWEAK and roles, now it’s time to stand back and put it all to work from the very beginning.

## Agile scheduling

The cold that pervades the North Pole makes everything a little less agile. But no worries, we can still be agile when we create something for IT operations there. The first thing we need are user stories. Who wants to create a schedule, and what is it? So let’s sit down and write them down.

• [US1] As a NPCC dean, given I have a list of classrooms (and their capacity) and a list of courses (and their enrolment), I want to assign classrooms to courses in the best way possible.

OK, we got something to work on here, so we can apply a tiny bit of domain-driven design.
We have a couple of entities, classrooms and courses, and a few value objects: single classrooms and single courses. Let’s go and write them. Using Comma, of course. Using classes for classes is only natural. But looking at the two defined classes, Santa couldn’t say which was which. At the end of the day, something with a name and a capacity is something with a name and a capacity. This begs for factoring out the common code, following the DRY (don’t repeat yourself) principle. Besides, we have a prototype above that pretty much says that whatever we use for classrooms and courses, our life is going to be easier if we can sort it in the same way. So it’s probably best if we spin off a role with the common behavior. Let’s make an issue out of that. Pretty much the same as the US, but the protagonist is going to be a programmer:

• As a programmer, I need to sort classrooms and courses by capacity using the same code.

Let’s call it Capped, as in having a certain capacity. Since both objects will have the self-same method, capacity, we can call it to sort them out. Our example above shows that we need to create an object out of a couple of elements, so that’s another issue:

• As a programmer, I need to build an object using positional arguments that are strings.

So finally our role will have all these things:

unit role Cap;

has Str $!name is required;
has Int $!capacity is required;

submethod new( Str $name, Int $capacity ) {
    self.bless( :$name, :$capacity )
}

submethod BUILD( :$!name, :$!capacity ) {};

method name() { $!name }
method capacity() { $!capacity }

plus handy accessors for name and capacity, all the while keeping these private, which also implies that they are immutable. Value objects are things that simply hold a value; there’s not much business logic to them. We could already create a function that does the sorting out of a list of classrooms/courses, that is, Caps, but in OO design we should try and put into classes (from which we spawn objects) as many entities from the original problem as possible. These entities will be the ones actually doing the heavy lifting. It would be great, again, if we could create kinda the same things, because then we will be able to handle them uniformly. But there’s the conundrum: one of them will contain a list of Courses, the other a list of Classrooms. They both do Cap, so in principle we could create a class that hosts lists of Caps. But this controller class will have some business logic: it will create objects of that class; we can’t simply use roles to create classes that compose them. So we will use a curried role, a parametrized role that uses, as a parameter, the role we’ll be instantiating it with. This will be Cap-List:

unit role Cap-List[::T];

has T @!list;

submethod new( $file where .IO.e ) {
    $file.IO.lines
    ==> map( *.split( /","\s+/ ) )
    ==> map( { T.new( @_[0], +@_[1] ) } )
    ==> sort( { -$_.capacity } )
    ==> my @list;
    self.bless( :@list );
}
submethod BUILD( :@!list ) {}

method list() { @!list }

This code is familiar and similar to what we’ve done above, except that we’ve swapped the object creation and the sorting of the list, and we use .capacity to sort it. We create a list and bless it into the object. Out of that, we create a couple of classes:

unit class Classroom-List does Cap-List[Classroom];
unit class Course-List does Cap-List[Course];

We don’t need any more logic; that’s all there is. It’s essentially the same thing, same business logic, but we’re working in a type-safe way. We have also tested the whole thing, so we’ve frozen the API and protected it from future evolution. Which Santa approves.

So we’re almost there. Let’s write the assignment function with this:

my $courses = Course-List.new( "docs/courses.csv");
my $classes = Classroom-List.new( "docs/classes.csv");
say ($classes.list Z $courses.list )
    .map( {  $_.map( { .name } ).join( "\t→\t") } )
    .join( "\n" );

This returns the same thing as we had before. But we’ve hidden all business logic (sorting, and anything else we might want) in the object capsule.

## But, have we?

Not actually. Assignment should also be encapsulated in some class, and thoroughly tested. That’s, however, left for another occasion.

## Andrew Shitov: Raku Challenge, Week 92, Issue 1

### Published by Andrew Shitov on 2020-12-22T08:24:00

This week’s task has an interesting solution in Raku. So, here’s the task:

You are given two strings $A and $B. Write a script to check if the given strings are Isomorphic. Print 1 if they are, otherwise 0.

OK, so if the two strings are isomorphic, their characters are mapped: for each character from the first string, the character at the same position in the second string is always the same. In the strings abc and def, a always corresponds to d, b to e, and c to f. That’s a trivial case. But then for the string abca, the corresponding string must be defd. The letters do not need to go sequentially, so the strings aeiou and bcdfg are isomorphic too, as well as aeiou and gxypq. But also aaeeiioouu and bbccddffgg, or the pair aeaieoiuo and gxgyxpyqp. The definition also means that the number of different characters is equal in both strings. But it also means that if we make the pairs of corresponding letters, the number of unique pairs is also the same, right? If a matches x, there cannot be any other pair with the first letter a. Let’s exploit these observations:

sub is-isomorphic($a, $b) {
    +(
        ([==] ($a, $b)>>.chars)
        &&
        ([==] ($a.comb, $b.comb, ($a.comb Z~ $b.comb))>>.unique)
    );
}

First of all, the strings must have the same length. Then, the strings are split into characters, and the number of unique characters should also be equal. But the collection of unique pairs made from the corresponding letters of both strings should also be of the same size. Test it:

use Test;

# . . .
is(is-isomorphic('abc', 'def'), 1);
is(is-isomorphic('abb', 'xyy'), 1);
is(is-isomorphic('sum', 'add'), 0);
is(is-isomorphic('ACAB', 'XCXY'), 1);
is(is-isomorphic('AAB', 'XYZ'), 0);
is(is-isomorphic('AAB', 'XXZ'), 1);
is(is-isomorphic('abc', 'abc'), 1);
is(is-isomorphic('abc', 'ab'), 0);

* * *

## Raku Advent Calendar: Day 22: What’s the point of pointfree programming?

### Published by codesections on 2020-12-22T01:01:00

He had taken a new name for most of the usual reasons, and for a few unusual ones as well, not the least of which was the fact that names were important to him.

— Patrick Rothfuss, The Name of the Wind

If you’re a programmer, there’s a good chance that names are important to you, too. Giving variables and functions well-chosen names is one of the basic tenets of writing good code, and improving the quality of names is one of the first steps in refactoring low-quality code. And if you are both a programmer and at all familiar with Raku (renamed from “Perl 6” in 2019), then you are even more likely to appreciate the power and importance of names. This makes the appeal of pointfree programming – which advocates for removing many of the names in your code – a bit mysterious. Given how helpful good names are, it can be hard to understand why you’d want to eliminate them. This isn’t necessarily helped by some of the arguments put forward by advocates of pointfree programming (which is also sometimes called tacit programming). For example, one proponent of pointfree programming said:

Sometimes, especially in abstract situations involving higher-order functions, providing names for tangential arguments can cloud the mathematical concepts underlying what you’re doing. In these cases, point-free notation can help remove those distractions from your code.

That’s not wrong, but it’s also not exactly helpful; when reading that, I find myself thinking “sometimes; OK, when?
in abstract situations; OK, what sort of situations?” And it seems like I’m not the only one with a similar set of questions, as the top Hacker News comment shows. Given arguments like these, I’m not at all surprised that many programmers dismiss pointfree programming in essentially the same way Wikipedia does: according to Wikipedia, pointfree programming is “of theoretical interest” but can make code “unnecessarily obscure”. This view – though understandable – is both mistaken and, I believe, deeply unfortunate. Programming in a pointfree style can make code far more readable; done correctly, it makes code less obscure rather than more. In the remainder of this post I’ll explain, as concretely as possible, the advantage of coding with fewer names. To keep myself honest, I’ll also refactor a short program into pointfree style (the code will be in Raku, but both the before and after versions should be approachable to non-Raku programmers). Finally, I’ll close by noting a handful of the ways that Raku’s “there’s more than one way to do it” philosophy makes it easier to write clear, concise, pointfree code (if you want to). ## The fundamental point of pointfree I said before that names are important, and I meant it. My claim is the one that G.K. Chesterton (or his dog) might have made if only he’d cared about writing good code: we should use fewer names not because names are unimportant but precisely because of how important names are. Let’s back up for just a minute. Why do names help with writing clear code in the first place? Well, most basically, because good names convey information. sub f($a, $b) may show you that you’ve got a function that takes two arguments – but it leaves you totally in the dark about what the function does or what role the arguments play. But everything is much clearer as soon as we add names: sub days-to-birthday($person, $starting-date). Suddenly, we have a much better idea what the function is doing. 
Not a perfect idea, of course; in particular, we likely have a number of questions of the sort that would be answered by adding types to the code (something Raku supports). But it’s undeniable that the names added information to our code. So if adding names adds info, it’ll make your code clearer and easier to understand, right? Well, sure … up to a point. But this is the same line of thinking that leads to pages and pages of loan “disclosures”, each of which is designed to give you more information about the loan. Despite these intentions, anyone who has confronted a stack of paperwork the approximate size of the Eiffel Tower can attest that the cumulative effect of this extra info is to confuse readers and obscure the important details. Excessive names in code can fall into the same trap: even if each name technically adds info, the cumulative effect of too many names is confusion rather than clarity. Here’s the same idea in different words: what names add to your code is not just extra info but also extra emphasis. And the thing about emphasis – whether it comes from bold, all-caps, or naming – is that it loses its power when overused. Giving everything a name is the same sort of error as writing in ALL-CAPS. Basically, don’t be this guy: <Khassaki>: HI EVERYBODY!!!!!!!!!! <Judge-Mental>: try pressing the Caps Lock key <Khassaki>: O THANKS!!! ITS SO MUCH EASIER TO WRITE NOW!!!!!!! <Judge-Mental>: f**k me  source (expurgation added, mostly to have an excuse to use the word expurgation). I believe that the fundamental benefit of using pointfree programming techniques to write code with fewer names is that it allows the remaining names to stand out more – which lets them convey more information than a sea of names would do. ## What does it mean to “understand” a line of code? Do you understand this line of Raku code? $fuel += $mass Let’s imagine how a very literal programmer – we’ll call them Literal Larry – might respond. 
(Literal Larry is, of course, not intended to refer to Raku founder Larry Wall. That Larry may have been accused of various stylistic flaws over the years, but never of excessive literalness.) Literal Larry might say, “Of course I understand what that line does! There’s a $fuel variable, and it’s incremented by the value of the $mass variable. Could it be any more obvious?”. But my response to Larry (convenient strawman that he is) would be, “You just told me what that line says, but not what it does. Without knowing more of the context around that line, in fact, we can’t know what that line does. Understanding that single – and admittedly simple! – line requires that we hold the context of other lines in our head. Worse, because it’s changing the value of one variable based on the value of another, understanding it requires us to track mutable state – one of the fastest ways to add complexity to a piece of code.” And that sets up my second claim about coding in a pointfree style: It often reduces the amount of context/state that you need in your head to understand any given line of code. Pointfree code reduces the reliance on context/state in two ways: first, to the extent that we totally eliminate some named variables, then we obviously no longer need to mentally track the state of those variables. Less obviously (but arguably more importantly), a pointfree style naturally pushes you towards limiting the scope of your variables and reduces the number to keep track of at any one time. (You’ll see this in action as we work through the example below.) ## A pointed example Despite keeping our discussion as practical as possible, I worry that it has drifted a bit away from the realm of the concrete. Let’s remedy that by writing some actual code! I’ll present some code in a standard procedural style, refactor it into a more pointfree style, and discuss what we get out of the change. But where should we get our before code? 
It needs to be decently written – my exchange with Literal Larry was probably enough strawmanning for one post, and I don’t want you to think that the refactored version is only an improvement because the original was awful. At the same time, it shouldn’t be great idiomatic Raku code, because that would mean using enough of Raku’s superpowers to reduce the code’s accessibility (I want to explain what’s going on in the after code, but don’t want to get bogged down teaching the before). It should also be just the right length – too short, and we won’t be able to see the advantages of reducing context; too long, and we won’t have space to walk through it in any detail. Fortunately, the Raku docs provide the perfect before code: the Raku by example 101 code. This simple script is not idiomatic Raku; it’s a program that does real (though minimal) work while using only the very basics of Raku syntax. Here’s how that page describes the script’s task:

Suppose that you host a table tennis tournament. The referees tell you the results of each game in the format Player1 Player2 | 3:2, which means that Player1 won against Player2 by 3 to 2 sets. You need a script that sums up how many matches and sets each player has won to determine the overall winner.

The input data (stored in a file called scores.txt) looks like this:

Beth Ana Charlie Dave
Ana Dave | 3:0
Charlie Beth | 3:1
Ana Beth | 2:3
Dave Charlie | 3:0
Ana Charlie | 3:1
Beth Dave | 0:3

The first line is the list of players. Every subsequent line records the result of a match. I believe that the code should be legible, even to programmers who have not seen any Raku. The one hint I’ll provide for those who truly haven’t looked at Raku (or Perl) is that @ indicates that a variable is array-like, % indicates that it’s hashmap-like, and $ is for all other variables. If any of the other syntax gives you trouble, check out the full walkthrough in the docs.

Here’s the 101 version:

use v6d;
# start by printing out the header.
say "Tournament Results:\n";

my $file = open 'scores.txt';  # get filehandle and...
my @names = $file.get.words;   # ... get players.

my %matches;
my %sets;

for $file.lines -> $line {
    next unless $line;        # ignore any empty lines

    my ($pairing, $result) = $line.split(' | ');
    my ($p1, $p2)          = $pairing.words;
    my ($r1, $r2)          = $result.split(':');

    %sets{$p1} += $r1;
    %sets{$p2} += $r2;

    if $r1 > $r2 {
        %matches{$p1}++;
    } else {
        %matches{$p2}++;
    }
}

my @sorted = @names.sort({ %sets{$_} }).sort({ %matches{$_} }).reverse;

for @sorted -> $n {
    my $match-noun = %matches{$n} == 1 ?? 'match' !! 'matches';
    my $set-noun   = %sets{$n} == 1 ?? 'set' !! 'sets';
    say "$n has won %matches{$n} $match-noun and %sets{$n} $set-noun";
}

OK, that was pretty quick. It uses my to declare 13 different variables; let’s see what it would look like if we declare 0. Before I start, though, one note: I said that the code above isn’t idiomatic Raku, and the code below won’t be either. I’ll introduce considerably more of Raku’s syntax where it makes the code more tacit, but I’ll still steer clear of some forms I’d normally use that aren’t related to refactoring the code in a more pointfree style. I also won’t make unrelated changes (e.g., removing mutable state) that I’d normally include. Finally, this code also differs from typical Raku (at least the way I write it) by being extremely narrow. I typically aim for a line length under 100 characters, but because I’d like this to be readable on pretty much any screen, these lines never go above 45.

With that throat-clearing out of the way, let’s get started. Our first step is pretty much the same as in the 101 code; we open our file and iterate through the lines.

open('scores.txt')
==> lines()

You can already see one of the key pieces of syntax we’ll be using to adopt a pointfree style: ==>, the feed operator. This operator takes the result from open('scores.txt') and passes it to lines() as its final argument. (This is similar to, but not exactly the same as, calling a .lines() method on open('scores.txt'). Most significantly, ==> passes a value as the last parameter to the following function; calling a method is closer to passing a value as the first parameter.)

Now we’re dealing with a list of all the lines in our input file – but we don’t actually need all the lines, because some are useless (to us) header lines. We’ll solve this in basically the same way we would on the command line: by using grep to limit the lines to just those we care about. In this case, that means just those that have the ” | ” (space-pipe-space) delimiter that occurs in all valid input lines.

  ==> grep(/\s '|' \s/)

A few syntax notes in passing: first, Raku obviously has first-class support for regular expressions. Second, and perhaps more surprisingly, note that Raku regexes default to being *in*sensitive to whitespace; /'foo' 'bar'/ matches ‘foobar’, not ‘foo bar’. Finally, Raku regexes require non-alphabetic characters to be enclosed in quotes (') before they match literally.

After using grep to limit ourselves to the lines we care about, we’re dealing with a sequence of lines something like Ana Dave | 3:0. Our next task is to convert these lines into something more machine readable. Since we just went over the regex syntax, let’s stick with that approach.

  ==> map({
      m/ $<players>=( (\w+) \s (\w+) ) \s '|' \s $<sets-won>=( (\d+) ':' (\d+) ) /;
      [$<players>[0], $<sets-won>[0]],
      [$<players>[1], $<sets-won>[1]]
})

This uses the Regex syntax we introduced above and adds a bit on top. Most importantly, we’re now naming our capture groups: we have one capture group named players that captures the two space-separated player names before the | character. (Apparently our tournament only identifies players with one-word names, a limitation that was present in the 101 code as well.) And the sets-won named capture group extracts out the :-delimited set results.

Once we’ve captured the names and scores for that match, we associate the correct scores with the correct names and create a 2×2 matrix/nested array with our results.

Actually, though, we’re not quite done with everything we want to do inside this map – we’ve given meaning to the order of the elements within each row, but the order of the rows themselves is currently meaningless. Let’s fix that by sorting our returned array so that the winner is always at the front:

      [$<players>[0], $<sets-won>[0]],
      [$<players>[1], $<sets-won>[1]]
      ==> sort({-.tail})

With this addition, our code so far is:

open('scores.txt')
==> lines()
==> grep(/\s '|' \s/)
==> map({
      m/ $<players>=( (\w+) \s (\w+) ) \s '|' \s $<sets-won>=( (\d+) ':' (\d+) ) /;
      [$<players>[0], $<sets-won>[0]],
      [$<players>[1], $<sets-won>[1]]
      ==> sort({-.tail})
})

At this point, we’ve processed our input lines into arrays; we’ve gone from something like Ana Dave | 3:0 to something a bit like

[ [Ana,  3],
[Dave, 0] ]

Now it’s time to start combining our separate arrays into a data structure that represents the results of the entire tournament. As in most languages these days, Raku does this with reduce (some languages call the same operation fold). We’re going to use reduce to build a single hashmap out of our list of nested arrays. However, before we can do so, we’re going to need to add an appropriate initial value to reduce onto (here, an empty Hash).
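If reduce-with-a-seed-value is new to you, the shape of the upcoming computation can be sketched outside Raku too. Here is a rough Python version using functools.reduce, with invented helper and data names, folding winner-first match rows into one running tally (the empty dict plays the role of the empty Hash):

```python
from functools import reduce

# Winner-first match rows, as after the sort step above (data illustrative).
matches = [[("Ana", 3), ("Dave", 0)],
           [("Dave", 3), ("Charlie", 0)]]

def tally(wins, match):
    """Fold one match into the running totals; `wins` starts as the seed {}."""
    for name, sets in match:
        entry = wins.setdefault(name, {"matches": 0, "sets": 0})
        entry["sets"] += sets
    winner = match[0][0]             # first row holds the match winner
    wins[winner]["matches"] += 1
    return wins

totals = reduce(tally, matches, {})  # {} is the initial value
print(totals["Ana"])                 # {'matches': 1, 'sets': 3}
```

The third argument to reduce is exactly the initial value the Raku code is about to conjure up with its {%, |$_}() trick.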

Raku gives us a solid half-dozen ways to do so – including specifying an initial value when you call reduce, much like you would in modern JavaScript. I’m going to accomplish the same thing differently, both because it’s more fun and because it lets me introduce you to 5 useful pieces of syntax in just 10 characters, which may be some sort of a record. Here’s the line:

  ==> {%, |$_}()

OK, there’s a lot packed in there! Let’s step through it. {...} is Raku’s anonymous block (i.e., lambda) syntax. So {...}() would normally create an anonymous block and then call it without any arguments. However, as we said above, ==> automatically passes the return value of its left-hand side as the final argument to its right-hand side. So ==> {...}() calls the block with the value that was fed into the ==>. Since this block doesn’t specify a signature (more on that very shortly), it doesn’t have any named parameters at all; instead, any values the block is called with are placed in the topic variable – which is accessed with $_. Putting what we have so far together, we can show a complex (but succinct!) way to do nothing: ==> {$_}(). That expression feeds a value into a block, loads the value into the topic variable, and then returns it without doing anything at all. Our line did something, however – after all, we have 4 more characters and 2 new concepts left in our line! Starting at the left, we have the % character, which you may recognize as the symbol that indicates that a variable is hash-like (Associative, if we’re being technical). On its own like this, it effectively creates an empty hash – which we could also have done with Hash.new, {}, or %(), but I like % best here. And the , operator, which we’ve already used without remarking on it, combines its arguments into a list. Here’s an example using the syntax we’ve covered so far:

[1, 2, 3] ==> {0, $_}()

That would build a list out of 0 and [1, 2, 3]. Specifically, it would build a two-element list; the second element would be the array [1, 2, 3]. That is not quite what we want, because we want to add % onto the front of our existing list instead of creating a new and more nested list.

As you may have guessed, the final character we have left – | – solves this problem for us. This Slip operator is one of my favorite bits of Raku cleverness. (Well, top 20 anyway – there are a lot of contenders!) The | operator transforms a list into a Slip, which is “a kind of List that automatically flattens”, as the docs put it. In practice, this means that Slips merge into lists instead of becoming single elements in them. To return to our earlier example,

[1, 2, 3] ==> {0, |$_}()

produces the four-element list (0, 1, 2, 3) instead of the two-element list (0, [1, 2, 3]) we got without the |.

Putting all this together, we are now in a position to easily understand the ~15 character (!) line of code we’ve been talking about. Recall that we’d just used map to transform our list of lines into a list of 2×2 matrices. If we’d printed them out, we would see something kind of like:

( [ [Ana, 3], [Dave, 0] ], ... )

When we feed this array into the {%, |$_}() block, we slip it into a list with the empty hash, and end up with something like:

( {},
  [ [Ana,  3],
    [Dave, 0] ],
  ...
)
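If you’d like to see the slip behavior in isolation, here is a standalone comparison (my own sketch, not part of the program we’re building):

```raku
# With | the array's elements merge into the outer list;
# without it, the array stays nested as a single element.
my @merged = %(), |[1, 2, 3];
my @nested = %(),  [1, 2, 3];
say @merged.elems;  # 4 – (%(), 1, 2, 3)
say @nested.elems;  # 2 – (%(), [1, 2, 3])
```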

With that short-but-dense line out of the way, we can proceed on to calling reduce. As in many other languages, we’ll pass in a function for reduce to use to combine our values into a single result value. We’ll do this with the block syntax we just introduced (see, taking our time on that line is already starting to pay off!). So it will look something like this:

==> reduce( {
# Block body goes here
})

Before filling in that body, though, let’s say a word about signatures (I told you it’d come up soon). As we discussed, when you don’t specify a signature for a block, all the arguments passed to the block get loaded into the topic variable $_. We can do anything we need to by manipulating/indexing into the topic variable, but that could get pretty verbose. Fortunately, we can specify the signature for a block by placing parameter names between -> and the opening {. Thus, -> $a, $b { $a + $b } is a block that accepts exactly two arguments and returns the sum of its arguments. In our case, we know that the first argument to reduce is going to be the hash we’re building up to track the total wins in the tournament and the second will be the 2×2 array that represents the results of the next match. That gives us a signature of

==> reduce(-> %wins, @match-results {
    # Block body goes here
})

So, how do we fill in the body? Well, since we previously sorted the array we’re now calling @match-results, we know that the first row contains the person who won the most sets (and therefore the match). More specifically, the first element in the first row contains that person’s name. So we want the first element of the first row – that is, the element that would be at (0, 0) if our array were laid out in 2D. Fortunately, Raku supports directly indexing into multi-dimensional arrays, so accessing this name is as simple as @match-results[0;0]. This means we can update our hash to account for the match winner with

%wins{@match-results[0;0]}<matches>++;

Handling the sets is very similar – the biggest difference is that we iterate through both rows of @match-results instead of indexing into the first row:

for @match-results -> [$name, $sets] {
    %wins{$name}<sets> += $sets;
}

Note the -> [$name, $sets] signature above. This shows Raku’s strong support for destructuring assignment, another key tool in avoiding explicit assignment statements.
-> [$a, $b] tells Raku that the block accepts a single array with two elements in it and assigns names to each. It’s equivalent to writing -> @array { my $a = @array[0]; my $b = @array[1]; ... }. (And if the idea of using destructuring assignment to avoid assignment feels like cheating in terms of pointfree style, then hold that thought, because we’ll come back to it when we get to the end of this example.)

At the end of our reduce block, we need to return the %wins hash we’ve been building. Putting it all together gives us

==> reduce(-> %wins, @match-results {
    %wins{@match-results[0;0]}<matches>++;
    for @match-results -> [$name, $sets] {
        %wins{$name}<sets> += $sets;
    }
    %wins })

At this point, we’ve built a hash-of-hashes that contains all the info we need; we’re done processing our input. Specifically, our hash contains keys for each of the player names in the tournament; the value of each is a hash showing that player’s total match and set wins. It looks a bit like this:

{ Ana  => { matches => 2, sets => 8 },
  Dave => ...,
  ... }

This contains all the information we need, but not necessarily in the easiest shape to work with for generating our output. Specifically, we would like to print results in a particular order (winners first) but we have our data in a hash, which is inherently unordered. Thus – as happens so often – we need to reshape our data from the shape that was the best fit for processing our input data into the shape that is the best fit for generating our output. Here, that means going from a hash-of-hashes to a list of hashes. We do so by first transforming our hash into a list of key-value pairs and then mapping that list into a list of hashes. In that map, we need to add the player’s name (info that was previously stored in the key of the outer hash) into the inner hash – if we skipped that step, we wouldn’t know which scores went with which players.
Here’s how that looks:

==> kv()
==> map(-> $name, %_ { %{:$name, |%_} })

I’ll note, in passing, that our map uses both destructuring assignment and the | slip operator to build our new hash. After this step, our data looks something like

( { name => "Ana", matches => 2, sets => 8 }
  ... )

This list isn’t inherently unordered the way a hash is, but we haven’t yet put it in any meaningful order. Let’s do so now.

==> sort({.<matches>, .<sets>, .<name>})
==> reverse()

Note that this preserves the somewhat wackadoodle sort order from the original code: sort by match wins, high to low; break ties in matches by set wins; break ties in set wins by reverse alphabetical order. At this point, we have all our output data organized properly; all that is left is to format it for printing.

When printing our output, we need to use the correct singular/plural affixes – that is, we don’t want to say someone won “1 sets” or “5 set”. Let’s write a simple helper function to handle this for us. We could obviously write a function that tests whether we need a singular or plural affix, but instead let’s take this chance to look at one more Raku feature that makes it easier to write pointfree code: multi-dispatch functions that perform different actions based on how they’re called. The function we want should accept a key-value pair and return the singular version of the key when the associated value is 1; otherwise, it should return the plural version of the key. Let’s start by stating what all versions of our function have in common using a proto statement:

proto kv-affix((Str, Int $v) --> Str) {*}

A few things to know about that proto statement: This is the first time we’ve added type constraints to our code, and they work just about as you’d expect. kv-affix can only be called with a string as its first argument and an integer as its second (this protects us from calling it with the key and value in the wrong order, for example). It’s also guaranteed to return a string. Additionally, note that we can destructure using a type (Str, here) without needing to declare a variable – handy for situations like this, where we want to match on a type without needing to use the value.

Finally, note that the proto is entirely optional; indeed, I don’t think that I’d necessarily use one here. But I would have felt remiss if we didn’t discuss Raku’s support for type constraints, which is generally quite helpful in writing pointfree code (even if we haven’t really needed it today).
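To see a proto constraining its candidates outside the context of our program, here is a self-contained sketch (the function name and candidates are mine, and it uses two plain positionals rather than the article’s destructured pair):

```raku
# Every call must satisfy the proto's (Str, Int --> Str) signature
# before any multi candidate is considered.
proto describe(Str, Int --> Str) {*}
multi describe($thing, 1)  { "one $thing" }   # literal 1 wins for that value
multi describe($thing, $n) { "$n of them" }

say describe('match', 1);  # one match
say describe('match', 3);  # 3 of them
# describe(3, 'match') would die at dispatch: the proto requires (Str, Int)
```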

Next, let’s handle the case where we need to return the singular version of the key:

multi kv-affix(($_, 1)) { S/e?s$// }

As you can see, Raku lets us destructure/pattern match with literals – this version of our multi will only be invoked when kv-affix is called with 1 as its second argument. Additionally, notice that we’re destructuring the first parameter into $_, the special topic variable. Setting the topic variable not only lets us use that variable without giving it a name, but it also enables all the tools Raku reserves for the current topic. (If we want these tools without destructuring into the topic variable, we can also set the topic with with or given.) Setting the topic to the key we’re modifying is helpful here because it lets us use the S/// non-destructive substitution operator. This operator matches a regex against the topic and then returns the string that results from replacing the matched portion of the string. Here, we match 0 or 1 e’s (e?) followed by an ‘s’, followed by the end of the string ($). We then replace that ‘s’ or ‘es’ with nothing, effectively trimming the plural affix from the string.
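The S/// substitution is easy to experiment with on its own. A quick sketch of mine, running the same regex over a couple of topic values:

```raku
# S/// matches against the topic ($_) and returns the modified copy,
# leaving the original string untouched.
for <matches sets> {
    say S/e?s$//;   # prints "match", then "set"
}
```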

The final multi candidate is trivial. It just says to return the unaltered plural key when the previous multi candidate didn’t match (that is, when the value isn’t 1).

multi kv-affix(($k, $)) { $k }

(We use $ as a placeholder for a parameter when we don’t need to care about its type or its value.)

With those three lines of code, we now have a little helper function that will give us the correct singular/plural version of our keys. In all honesty, I’m not sure it was actually worth using a multi here. This might be a situation where a simple ternary condition – something like sub kv-affix(($_, $v)) { $v ≠ 1 ?? $_ !! S/e?s$// } – might have done the trick more concisely and just as clearly. But that wouldn’t have given us a reason to talk about multis, and those are just plain fun.

In any event, now that we have our helper function, formatting each line of our output is fairly trivial. Below, I do so with the venerable C-style sprintf, but Raku offers many other options for formatting textual output if you’d prefer something else.

==> map({
    "%s has won %d %s and %d %s".sprintf(
        .<name>,
        .<matches>, kv-affix(.<matches>:kv),
        .<sets>,    kv-affix(.<sets>:kv) )})

And once we’ve formatted each line of our output, the final step is to add the appropriate header, concatenate our output lines, and print the whole thing.

==> join("\n", "Tournament Results:\n")
==> say();

And we’re done.

## Evaluating our pointfree refactor

Let’s take a look at the code as a whole and talk about how it went.

use v6d;
open('scores.txt')
==> lines()
==> grep(/\s '|' \s/)
==> map({ m/$<players>=(  (\w+)  \s (\w+) )
          \s    '|'  \s
          $<sets-won>=( (\d+) ':' (\d+) )/;
          [$<players>[0], $<sets-won>[0]],
          [$<players>[1], $<sets-won>[1]]
          ==> sort({-.tail}) })
==> {%, |$_}()
==> reduce(-> %wins, @match-results {
    %wins{@match-results[0;0]}<matches>++;
    for @match-results -> [$name, $sets] {
        %wins{$name}<sets> += $sets;
    }
    %wins })
==> kv()
==> map(-> $name, %_ { %{:$name, |%_} })
==> sort({.<matches>, .<sets>, .<name>})
==> reverse()
==> map({
    "%s has won %d %s and %d %s".sprintf(
        .<name>,
        .<matches>, kv-affix(.<matches>:kv),
        .<sets>,    kv-affix(.<sets>:kv) )})
==> join("\n", "Tournament Results:\n")
==> say();

proto kv-affix((Str, Int) --> Str) {*}
multi kv-affix(($_, 1)) { S/e?s$// }
multi kv-affix(($k, $)) { $k }

So, what can we say about this code? Well, at 32 lines of code, it’s longer than the 101 version (and, even though these lines are pretty short, it’s longer by character count as well). So this version doesn’t win any prizes for concision. But that was never our goal. So how does it do on the goal we started out with – reducing assignments? Well, if we channel Literal Larry, we can say that it has zero assignment statements; it never assigns a value to a variable with my $name = 'value' or similar syntax. In contrast, the 101 code used my to assign to a variable over a dozen times. So, from a literal perspective, we succeeded.

But, as we already noted, ignoring destructuring assignment feels very much like cheating. Similarly, using named captures in a regex is essentially a form of assignment/naming. So, if we adopt an inclusive view of assignment, the 101 code has 15 assignments and our refactored code has 6. So a significant drop, but nothing like an order of magnitude difference.

But trying to evaluate our refactor by counting assignment statements is probably a fool’s errand to begin with. What I really care about – and, I suspect, what you care about too – is the clarity of our code. To some degree, that’s inherently subjective and depends on your personal familiarity and preferences – by my lights, ==> {%, |$_}() is extremely clear. Maybe, after we spent 3 paragraphs on that line, you agree; or you might not – and I doubt anything further I could say would change your mind. So, by my lights, the refactored code looks clearer. But clarity is not entirely a subjective matter. I argue that the refactored code is objectively clearer – and in exactly the ways the pointfree style is supposed to promote. Back at the beginning of this post, I claimed that writing tacit code has two main benefits: it provides better emphasis in your code, and it reduces the amount of context you need to hold in your head to understand any particular part of the code. Let’s look at each of these in turn.

In terms of emphasis, there’s one question I like to ask: what identifiers are in scope at the global program (or module) scope? Those identifiers receive the most emphasis; in an ideal world, they would be the most important. In the refactored code, there are no variables at all in the global scope and only one item: the kv-affix function. This function is appropriately in the global scope since it is of global applicability (indeed, it could even be a candidate to be factored out into a separate module if this program grew). Conversely, in the 101 code the global-scope variables are $file, @names, %matches, %sets, and @sorted. At least a majority of those are pure implementation details, undeserving of that level of emphasis. And some (though this bleeds into the “context” point, discussed below) are downright confusing in a global scope. What does @names refer to, globally? How about %matches? (Does it change your answer if I tell you that Match is a Raku type?) What about %sets? (Set is also a Raku type.) Of course, you could argue that these names are just poorly chosen, and I wouldn’t necessarily disagree. But coming up with good variable names is famously hard, and figuring out names that are clear in a global scope is even harder – there are simply more opportunities for conceptual clash.

To really emphasize this last point, take a look at the final line of the refactored code:

multi kv-affix(($k, $)) { $k }

If the name $k occurred in a global context, it would be downright inscrutable. It could be an iteration variable (old-school programmers tend to start with i, and then move on to j and k). It could stand for degrees Kelvin or, oddly enough, the Coulomb constant. Or it could be anything, really.

But because its scope is more limited, the meaning is clear. The function takes a key-value pair (typically generated in Raku with the .kv method or the :kv adverb) and is named kv-affix. Given those surroundings, it’s no mystery at all that $k stands for “key”. Keeping items out of the global scope both provides better emphasis and provides a less confusing context to evaluate the meaning of different names.

The second large benefit I claimed for pointfree code is that it reduces the amount of context/state you need to hold in your head to understand any given bit of code. Comparing these two scripts also supports this point. Take a look at the last line of the 101 code:

say "$n has won %matches{$n} $match-noun and %sets{$n} $set-noun";

Mentally evaluating this line requires you to know the value of $n (defined 3 lines above), $match-noun (2 lines above), $set-noun (1 line), %sets (24 lines), and %matches (25 lines). Considering how simple this script is, that is a lot of state to track! In contrast, the equivalent portion of the refactored code is

"%s has won %d %s and %d %s".sprintf(
    .<name>,
    .<matches>, kv-affix(.<matches>:kv),
    .<sets>,    kv-affix(.<sets>:kv) )

Evaluating the value of this expression only requires you to know the value of the topic variable (defined one line up) and the pure function kv-affix (defined 3–5 lines below). This is not an anomaly: every variable in the refactored code is defined no more than 5 lines away from where it is last used. (Of course, writing code in a pointfree style is neither sufficient nor necessary to limit the scope of variables. But as this example illustrates – and my other experience backs up – it certainly helps.)

## Raku supports pragmatic (not pure) pointfree programming

A true devotee of pointfree programming would likely object to the refactored code on the grounds that it’s not nearly tacit enough. Despite avoiding explicit assignment statements, it makes fairly extensive use of named function parameters and destructuring assignment; it just isn’t pure. Nevertheless, the refactored code sits in a pragmatic middle ground that I find highly productive: it’s pointfree enough to gain many of the clarity, context, and emphasis benefits of that style without being afraid to use a name or two when that adds clarity. And this middle ground is exactly where Raku shines (at least in my opinion! It’s entirely possible to write Raku in a variety of different styles and many of them are not in the least bit pointfree). Here are some of the Raku features that support pragmatic pointfree programming (most, but not all, of which we saw above):

If you’re already a Raku pro, I hope this list and this post have given you some ideas for some other ways to do it.
If you’re new to Raku, I hope this post has gotten you excited to explore some of the ways Raku could expand the way you program. And if you’re totally uninterested in writing Raku code – well, I hope you’ll reconsider, but even if you don’t, I hope that this post gave you something to think about and left you with some ideas to try out in your language of choice.

## Raku Advent Calendar: Day 21: The Story Of Elfs, and Roles, And Santas’ Enterprise

### Published by vrurg on 2020-12-21T01:01:00

Let’s be serious. After all, we’re grown-up people and know the full truth about Santa: he is a showman, and he is a top manager of Santa’s family business. No one knows his exact position, because we must not forget about Mrs. Santa, whose share in running the company is at least equal. The position is not relevant to our story anyway. What is important, though, is that running such a huge venture requires a lot of skills. Not to mention that the venture itself is also a tremendous show of its own, as one can find out from documentaries like The Santa Clause and many others filmed over the last several decades of human history.

What would be the hardest part of running The North Pole Inc.? Logistics? Yeah, but with all the magic of the sleds, and the reindeer, and the Christmas night, this task is not that hard to get done. Manufacturing? This task has been delegated to small outsourcing companies like Lego, Nintendo, and dozens of others across the globe. What else remains? The employees. Elves. And, gosh!, have you ever tried to organize them? Don’t even think of trying unless you have a backup in the form of a padded room served by polite personnel with a reliable supply of pills, where you’d be spending your scarce vacation days.
It’s an inhumane task, because one has to put together thousands, if not millions (as some estimations tell), of ambitious stage stars (no elf would ever consider himself a second-plane actor!), each charged with an amount of energy more appropriate to a small nuclear reactor… You know… How do the Santas manage? Sure, they’re open-hearted and all-forgiving beyond average human understanding. But that’s certainly not enough to build a successful business! So, there must be a secret ingredient, something common to both commercial structures and shows. And I think it’s well-done role assignment, which turns the Brownian motion of elf personalities into a self-organized structure.

In this article I won’t be telling how Raku helps the Santas sort things out or ease certain tasks. Instead I’ll try to describe some events happening within The North Pole company with the help of the Raku language and specifically its OO capabilities.

## Elves

Basically, an elf is:

class Elf is FairyCreature {...}

For some reason, many of them don’t like this definition; but who are we to judge them, as long as many humans still don’t consider themselves a kind of ape? Similarly, some elves consider fairy to be archaic and outdated and not related to them, modern beings. But I digress…

The above definition is highly oversimplified. Because if we start traversing the subtree of the FairyCreature class, we’re gonna find such diverse species in it as unicorns, goblins, gremlins, etc., etc., etc. Apparently, there must be something else defining the difference. Something that would provide properties sufficiently specific to each particular kind of creature. If we expand the definition of the Elf class, we’re gonna see lines like these:

class Elf is FairyCreature {
    also does OneHead;
    also does UpperLimbs[2];
    also does LowerLimbs[2];
    also does Magic;
    ...
}

I must make a confession here: I wasn’t allowed to see the full sources.
When I requested access to the fairy repository, the answer was: “Hey, if we reveal everything it’s not gonna look like magic anymore!” So, some code here is real, and some has been guessed out. I won’t tell you which is which; after all, let’s keep it magic!

Each line is a role defining a property, or a feature, or a behavior intrinsic to a generic elf (don’t mix it up with the spherical cow). So, when we see a line like that we say that: Elf does Magic; or, in other words: class Elf consumes role Magic. I apologize for not explaining in detail to Raku newcomers what a role is; hopefully the link will be helpful here. For those who know Java (I’m a dinosaur, I don’t), a role is somewhat similar to an interface, but better. It can define attributes and methods to be injected into a consuming class; it can require certain methods to be defined; and it can specify what other roles the class will consume, and what other classes it will inherit from. As a matter of fact, because of the complexity of elf species, the number of roles they do is too high to mention all of them here. Normally, when a class consumes only a few of them, it’s OK to write the code in another way:

class DisgustingCreature is FairyCreature does Slimy does Tentacles[13] {
    ...
}

But numerous roles are better put into the class body with the prefix also.

## When Santas Hire An Elf

It would probably be incorrect to say that elves are hired by the Santas. In fact, as we know it, all of them work for The North Pole Inc. exclusively. Yet, my thinking is that at some point in history the hiring did take place. Let’s try to imagine how it could have happened.

One way or another, there is a special role:

role Employee {...}

And there is a problem too: our class Elf is already composed and immutable. Moreover, each elf is an object of that class! Or, saying the same in the Raku language: $pepper-minstix.defined && $pepper-minstix ~~ Elf. Ok, what’s the problem?
If one tries to

$pepper-minstix.hire(
    :position("security officer"),
    :company("The North Pole Inc."),
    ...
)

a big boom! will happen because of no such method ‘hire’ for invocant of type ‘Elf’. Surely, the boom! is expected, because long, long ago elves and work were as compatible as Christmas and summer! But then there were the Santas. And what they did is called mixing a role into an object:

$pepper-minstix does Employee;

From the outside, the does operator adds the content of the Employee role to the object on its left-hand side, making all role attributes and methods available on the object. Internally, it creates a new class for which the original $pepper-minstix.WHAT is the only parent class, and which consumes the role Employee. Eventually, after the does operator, say $pepper-minstix.WHAT will output something like (Elf+{Employee}). This is now the new class of the object held by the $pepper-minstix variable.
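A minimal demonstration of that class-juggling, using stand-in definitions rather than the North Pole sources:

```raku
role Employee { method employed { True } }
class Elf {}

my $elf = Elf.new;
$elf does Employee;        # mix the role into this one object only
say $elf.^name;            # Elf+{Employee}
say $elf.employed;         # True
say Elf.new ~~ Employee;   # False – freshly built elves are unaffected
```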

Such a cardinal change in life made elves much happier! Being joyful creatures anyway, they now got a great chance to also be useful by sharing their joy with all the children, and sometimes not only children. One thing worried them, though. You know, it’s really impossible to find two identical people; all the more so, there are no two identical elves. But work? Wouldn’t it level them all down in a way? The Santas wouldn’t be the Santas if they didn’t share these worries with their new friends. To understand their solution, let’s see what the Employee role does for us. Of the most interest to us are the following lines:

has $.productivity;
has $.creativity;
has $.laziness;
has $.position;
has $.speciality;
has $.department;
has $.company;

For simplicity, I don’t use typed attributes in the snippet, though they’re there in the real original code. For example, the $.laziness attribute is a coefficient used, among other things, in a formula calculating how much time is spent on coffee or eggnog breaks. The core of the formula is something like:

method todays-coffee-breaks-length {
    $.company.work-hours * $.laziness * (1 + 0.2.rand)
}
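To get a feel for the numbers, here is the formula evaluated with made-up values (the work hours and coefficient are my own guesses, not official North Pole data):

```raku
# An 8-hour day with laziness 0.25 yields between 2 and 2.4 hours of breaks,
# since 0.2.rand is a random Num in the half-open range [0, 0.2).
my $work-hours = 8;
my $laziness   = 0.25;
my $breaks = $work-hours * $laziness * (1 + 0.2.rand);
say 2 <= $breaks < 2.4;   # True
```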

Because they felt their responsibility for the children, elves agreed to limit their maximum laziness level. Therefore the full definition of the attribute is something like:

has Num:D $.laziness where * < 0.3;

If anybody thinks that the maximum is too high, then they don’t have the Christmas spirit in their hearts! Santa Claus was happy about it, so why wouldn’t we be? I’m personally sure his satisfaction is well understood, because his own maximum is somewhere closer to 0.5, but – shh! – let’s keep it a secret!

Having all these characteristics in place, the Santas wanted to find a way to set them to combinations as diverse as possible. And here is something similar to what they came up with:

role Employee {
    ...
    method my-productivity {...}
    method my-creativity {...}
    method my-laziness {...}
    submethod TWEAK {
        $!productivity //= self.my-productivity;
        $!creativity   //= self.my-creativity;
        $!laziness     //= self.my-laziness;
    }
}

Now it was up to an elf to define their own methods to set the corresponding characteristics. But most of them were OK with a proposed special role for this purpose:

role RandomizedEmployee {
    method my-productivity { 1 - 0.3.rand }
    method my-creativity   { 1 - 0.5.rand }
    method my-laziness     { 0.3.rand }     # must satisfy the * < 0.3 constraint
}
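The same defaults-via-TWEAK pattern works in ordinary class composition too. Here is a compact, self-contained sketch with invented names (Worker, RandomizedWorker, Drone are mine, not from the North Pole sources):

```raku
role Worker {
    has $.laziness;
    method my-laziness { ... }   # stub: some consumer must provide it
    submethod TWEAK { $!laziness //= self.my-laziness }
}
role RandomizedWorker {
    method my-laziness { 0.3.rand }   # satisfies Worker's stub
}
class Drone does Worker does RandomizedWorker {}

my $drone = Drone.new;
say 0 <= $drone.laziness < 0.3;            # True – randomized default applied
say Drone.new(laziness => 0.1).laziness;   # 0.1 – explicit value wins over //=
```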

The hiring process took the following form now:

$pepper-minstix does Employee, RandomizedEmployee;

But, wait! We have three more attributes left behind! Yes, because these were left up to the Santas to fill in. They knew what kind of workers they needed, and where they needed them most. Therefore the final version of the hiring code was more like:

$pepper-minstix does Employee(
    :company($santas-company),
    :department($santas-company.department("Security")),
    :speciality(GuardianOfTheSecrets),
    ...
),
RandomizedEmployee;

With this line, Raku’s mixin protocol does the following:

1. creates a new mixin
2. sets the attributes defined with named parameters
3. invokes the role’s constructor, TWEAK
4. returns a new employee object

Because everybody knew that the whole thing was going to be a one-time venture, as elves would never leave their new boss alone, the code was a kind of trade-off between efficiency and speed of coding. Still, there were some interesting tricks used, but discussing them is beyond this story’s mainline. I think many readers can find their own solutions to the problems mentioned here.

I, in turn, move on to a story which took place not long ago…

## When Time Is Too Scarce

It was one of those craziest December days, when Mrs. Santa left for an urgent business trip. Already busy with mail and phone calls, Mr. Santa got additional duties in the logistics and the packing departments, which are usually handled by his wife. There was no way he could skip those, or otherwise the risk of something going wrong on the Christmas night would be too high. The only way to get everywhere on time was to cut down on the phone calls. It meant telling the elf-receptionist to answer with a “Santa is not available” message.

Santa sighed. He could almost see and hear the elf staring at him with deep regret and asking: “Nicholas, are you asking me to lie?” Oh, no! Of course he wouldn’t ask, but…

But? But! After all, even if the time/space magic of Christmas is not available on other days of the year, Santa can still do other kinds of tricks! So, here is what he did:

role FirewallishReceptionist {
    has Bool $.santa-is-in-the-office;
    has Str $.not-available-message;
    method answer-call {
        if $.santa-is-in-the-office {
            self.transfer-call: $.santas-number;
        }
        else {
            self.reply-call: $.not-available-message,
                             :record-reply, :with-marry-christmas;
        }
    }
}

my $strict-receptionist =
    $receptionist but FirewallishReceptionist(
        :!santa-is-in-the-office,
        :not-available-message(
            "Unfortunately, Santa is not available at the moment."
            ~ ... #`{ the actual message is longer than this }
        )
    );

$company.give-a-day-off: $receptionist;
$company.santa-office-frontdesk.assign: :receptionist($strict-receptionist);

The operator but is similar to does, but instead of altering its left-hand side operand, it creates a clone of it and then mixes the right-hand side role into the clone. Just imagine the amazement of the receptionist when he saw his own copy taking his place at his desk! But a day off is a day off; he wasn’t really much against applying his laziness coefficient to the rest of that day…

As to Santa himself… He has never been really proud of what he did that day, even though it was needed in the name of saving Christmas. Besides, the existence of a clone created a few awkward situations later, especially when both elves were trying to do the same work while still sharing some data structures. But that’s a separate story on its own…

## When New Magic Helps

Have you seen the elves this season? They’re always very strict in sticking to the latest tendencies of Christmas fashion: rich colors, spangles, all fun and joy! Yet this year is really something special!

It all started at the end of the spring. Santa was sitting in his chair, having a well-deserved rest after the last Christmas he served. The business did not demand as much attention as it usually does at the end of the autumn. So, he was sitting by the fireplace, drinking chocolate, and reading the news. Though the news was far from being the best part of Santa’s respite (no word about 2020!). Eventually, Santa put away his tablet, made a deep sip from his giant mug, and said aloud: “Time to change their caps!” No idea what pushed him to this conclusion, but from this moment on the elves knew that a new fashion was coming!
The idea Santa wanted to implement was to add a WiFi connection and LEDs to elvish caps, and to make the LEDs twinkle with patterns available from a local server of The North Pole Inc. Here is what he started with:

role WiFiConnect {
    has $.wifi-name is required;
    has $.wifi-user is required;
    has $.wifi-password is required;
    submethod TWEAK {
        self.connect-wifi( $!wifi-name, $!wifi-user, $!wifi-password );
    }
}

role ShinyLEDs {
    submethod TWEAK {
        if self.test-circuits {
            self.LED( :on );
        }
        if self ~~ WiFiConnect {
            self.set-LED-pattern: self.fetch( :config-key<LED-pattern> );
        }
    }
}

class ElfCap2020 is ElfCap does WiFiConnect does ShinyLEDs {...}

Note, please, that I don’t include the body of the class here, for it’s too big for this article. But the attempt to compile the code resulted in:

Method 'TWEAK' must be resolved by class ElfCap2020 because it exists in multiple roles (ShinyLEDs, WiFiConnect)

“Oh, sure thing!” – Santa grumbled to himself. And added a TWEAK submethod to the class:

submethod TWEAK {
    self.WiFiConnect::TWEAK;
    self.ShinyLEDs::TWEAK;
}

This made the compiler happy, and ElfCap2020.new came up with a new and astonishingly fun cap instance! “Ho-ho-ho!” – Santa couldn’t help laughing with joy.

It was time to start producing the new caps for all company employees; and this was the moment when it became clear that mass production of the new cap would require coordinated efforts of so many third-party vendors and manufacturers that there was no way to equip everybody with the new toy by the time Christmas came. Does Santa give up? No, he never does! What if we try to modernize the old caps? It would only require so many LEDs and controllers, and should be feasible to handle on time! Suit the action to the word! With a good design it should be no harder than:

$old-cap does (WiFiConnect(:$wifi-name, :$wifi-user, :$wifi-password), ShinyLEDs);

And… Boom!

Method 'TWEAK' must be resolved by class ElfCap+{WiFiConnect,ShinyLEDs} because it exists in multiple roles (ShinyLEDs, WiFiConnect)

Santa sighed. No doubt, this was expected. Because does creates an implicit empty class, the two submethods from both roles clash when the compiler tries to install them into that class. A dead end? No way! Happy endings are what Santa loves! And he knows what to do.
He knows that a new version of the Raku language is in development. It is not released yet, but it is available for testing with the Rakudo compiler when requested with use v6.e.PREVIEW at the very start of a compilation unit, which is normally a file. Santa also knows that one of the changes the new language version brings is that submethods stay where they were declared, no matter what. It means that where previously a submethod was copied over from a role into the class consuming it, it now remains the sole property of the role. And the language itself now takes care of walking over all elements of a class inheritance hierarchy, roles included, and invoking their constructor and/or destructor submethods if there are any. Not sure what that means? Check out the following example:

    use v6.e.PREVIEW;

    role R1 { submethod TWEAK { say ::?ROLE.^name, "::TWEAK" } }
    role R2 { submethod TWEAK { say ::?ROLE.^name, "::TWEAK" } }

    class C { }

    my $obj = C.new;
    $obj does (R1, R2);
    # R1::TWEAK
    # R2::TWEAK

Apparently, adding use v6.e.PREVIEW at the beginning of the modernization script makes the $old-cap does (WiFiConnect, ShinyLEDs); line work as expected!

Moreover, switching to Raku 6.e also makes submethod TWEAK unnecessary for the ElfCap2020 class, if its only function is to dispatch to the role TWEAKs. Though, to be frank, Santa kept it anyway, as he needed a few adjustments done at construction time. But the good thing is that it was no longer necessary for him to worry much about the minor details of combining all the class components together.

And so the task was solved. In the first stage all the old caps were modernized and made ready before the season started and Christmas preparations took up all the remaining spare time of The North Pole Inc. The new caps will now be produced without extra fuss and will be ready for the 2021 season. Santa used the time spared to adapt WiFiConnect and ShinyLEDs for use with his sleds too. When told by The Security Department that the additional illumination makes camouflaging a sled much harder, if not impossible, Santa only shrugged and replied: “You’ll manage, I have my trust in you!” And they did, but that’d be one more story…

## Happy End

When it comes to The North Pole it’s always hard to tell the truth from fairy tales, and to separate magic from science. But, after all, it is a law of nature that any sufficiently advanced technology is indistinguishable from magic. With Raku we try to bring a little bit of good magic into this life. It is so astonishing to know that Raku is supported by nobody else but the Santa family themselves!

Merry Christmas and Happy New Year!

## Andrew Shitov: Advent of Code 2020 Day 18/25 in the Raku programming language

Today there’s a chance to demonstrate powerful features of Raku on the solution of Day 18 of this year’s Advent of Code.

The task is to print the sum of a list of expressions with +, *, and parentheses, but the precedence of the operations is equal in the first part of the problem, and is opposite to the standard precedence in the second part.

In other words, 3 + 4 * 5 + 6 is (((3 + 4) * 5) + 6) in the first case and (3 + 4) * (5 + 6) in the second.
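Both readings can be checked directly with ad-hoc operators (a sketch of mine; the names m and mm are not from the challenge). A user-defined infix operator defaults to the precedence of infix:<+>, while the is looser<+> trait drops it below +:

```raku
use MONKEY-SEE-NO-EVAL;

sub infix:<m>($a, $b)  { $a * $b }               # defaults to the precedence of +
sub infix:<mm>($a, $b) is looser<+> { $a * $b }  # looser than +, so + binds first

say EVAL '3 + 4 m 5 + 6';    # ((3 + 4) * 5) + 6 = 41
say EVAL '3 + 4 mm 5 + 6';   # (3 + 4) * (5 + 6) = 77
```

EVAL compiles the string in the current lexical scope, so it sees both custom operators.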

Here is the solution. I hope you are impressed too.

    use MONKEY-SEE-NO-EVAL;

    sub infix:<m>($a, $b) { $a * $b }

    say [+] ('input.txt'.IO.lines.race.map: *.trans('*' => 'm')).map: { EVAL($_) }

The lines with the expressions come from the file input.txt. For each line, I am replacing * with m, which I earlier made an infix operator that actually does multiplication.

For the second part, we need our m to have lower precedence than +. There’s nothing simpler:

    sub infix:<m>($a, $b) is looser<+> { $a * $b }

Parsing and evaluation are done using EVAL.

* * *

## Andrew Shitov: The second wave of Covid.observer

### Published by Andrew Shitov on 2020-12-15T22:15:33

When I started covid.observer about seven months ago, I thought there would be no need to update it after about 3-4 months. In reality, we are approaching the end of the year, and I will have to fix the graphs which display data per week, as the week numbers will very soon wrap around.

All this time, more data arrived, and I also added separate statistics for the regions of Russia, with its 85 subdivisions, which brought the total count of countries and regions up to almost 400.

    mysql> select count(distinct cc) from totals;
    +--------------------+
    | count(distinct cc) |
    +--------------------+
    |                392 |
    +--------------------+
    1 row in set (0.00 sec)

Due to frequent updates that change data in the past, it is not that easy to make incremental updates of the statistics, and again, I did not expect that I’d run the site for so long.

    mysql> select count(distinct date) from daily_totals;
    +----------------------+
    | count(distinct date) |
    +----------------------+
    |                  329 |
    +----------------------+
    1 row in set (0.00 sec)

The bottom line is that daily generation became clumsy and slow. Before summer, the whole website could be regenerated in less than 15 minutes, but now it takes 40-50 minutes. And I tend to do it twice a day, as a fresh portion of today’s Russian data arrives a few hours after we’ve got the daily update from Johns Hopkins University (for yesterday’s stats).
But the scariest signals began when the program started crashing with quite unpleasant errors.

    Latest JHU data on 12/12/20
    Latest RU data on 12/13/20
    Generating impact timeline...
    Generating world data...
    MoarVM panic: Unable to initialize event loop

    Failed to open file /Users/ash/Projects/covid.observer/COVID-19/csse_covid_19_data/csse_covid_19_daily_reports_us/12-10-2020.csv: Too many open files

    Generating impact timeline...
    Generating world data...
    Not enough positional arguments; needed at least 4
      in sub per-capita-data at /Users/ash/Projects/covid.observer/lib/CovidObserver/Statistics.rakumod (CovidObserver::Statistics) line 1906

The errors were not consistent, and I managed to re-run the program in pieces to get the update. But none of the errors were easily explainable. The MoarVM panic gives no explanation, but it completely disappears if I run the program in two parts:

    $ ./covid.raku fetch
    $ ./covid.raku generate

instead of a combined run that both fetches the data and generates the statistics:

    $ ./covid.raku update

The Too many open files error is a strange one: while I process the files in loops, I do not intentionally keep them open. That error, though, seems to be solved by changing a system setting:

    $ ulimit -n 10000

The final error, Not enough positional arguments; needed at least 4, is the weirdest. Such a thing happens when you call a function with fewer arguments than it expects. That never occurred during the months after all the bugs were found and fixed, so it can only be explained by a new piece of data. Indeed, it may happen that some data is missing, but I believed I had already found all the cases where I need to provide the function calls with default zero values.

On top of all that, a program run takes dozens of minutes before you can catch an error, which made the whole thing quite frustrating.

And here comes Liz! She proposed to look into it and actually spent a whole day, first installing the code and all its requirements, and then actually doing the job of running, debugging, and re-running. By the end of the day she had created a pull request, which made the program twice as fast!

Let’s look at the changes. There are three of them (but no, they do not directly answer the three error messages mentioned above). The first two changes introduce parallel processing of the countries (remember, there are about 400 of what the program considers a unique $cc).

    my %country-stats = get-known-countries<>.race(:1batch, :8degree).map: -> $cc {
        $cc => generate-country-stats($cc, %CO, :%mortality, :%crude, :$skip-excel)
    }

Calling .race on the result of the get-known-countries() function parallelises the previously sequential processing of the countries. Indeed, their stats are computed independently, so there’s no reason for one country to wait for another. The parameters of race, the batch size and the number of workers, can probably be tuned to fit your hardware.
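As a toy illustration of the same pattern (the workload here is made up, not from the site’s code), .race farms the map out to worker threads; since completion order is not guaranteed, only order-insensitive results are checked:

```raku
# Square 400 numbers in parallel: :1batch hands one item to a worker at
# a time, and :8degree allows up to eight workers.
my @squares = (1..400).race(:1batch, :8degree).map: { $_ ** 2 };

say @squares.elems;   # 400
say @squares.sum;     # 21413400, whatever order the workers finished in
```

Use .hyper instead of .race when the order of the results matters.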

The second change is similar, but for another part of the code where the continents are processed in a loop:

    for %continents.keys.race(:1batch, :8degree) -> $cont {
        generate-continent-stats($cont, %CO, :$skip-excel);
    }

Finally, the third change is to make some counters native integers instead of Raku Ints:

    my int $c = $confirmed[$index] // 0;
    my int $f = $failed[$index]    // 0;
    my int $r = $recovered[$index] // 0;
    my int $a = $active[$index]    // 0;

I understand that this reduces both the memory footprint and the processing time for these variables, but for some reason it also eliminated the error about the number of function arguments.

And finally, I want to mention the <> thing that you may have noticed in the first code change. This is the so-called decontainerization operator. What it does is illustrated by this example from the documentation:

    use JSON::Tiny;

    my $config = from-json('{ "files": 3, "path": "/home/some-user/raku.pod6" }');

    say $config.raku; # OUTPUT: «${:files(3), :path("/home/some-user/raku.pod6")}»

    my %config-hash = $config<>;
    say %config-hash.raku; # OUTPUT: «{:files(3), :path("/home/some-user/raku.pod6")}»

The $config variable is a scalar that keeps a hash. To work with it as a hash, the variable is decontainerized as $config<>. This gives us a proper hash, %config-hash.

I think that’s it for now. The main advantage of the above changes is that the program now needs less than 25 minutes to regenerate the whole site, and it does not fail. Well, it became a bit louder too, as Rakudo uses more cores. Thanks, Liz!

## Jo Christian Oterhals: Five books on swimming

### Published by Jo Christian Oterhals on 2020-11-16T20:20:28

### Five books on open water swimming

Recently I have read a lot of books on swimming — which, if you knew me, would seem unexpected. Having a fear of water after a near-drowning accident as a child, I never became a swimmer. Not even a so-so swimmer: I managed to learn what we in Norway call “Grandma swimming”, a sort of laborious and slow breast swimming with the head as high above water as humanly possible and the feet similarly low beneath.

But many years later, as an adult and a father, this slowly changed when my oldest son started attending swim practice. Even before taking up swimming as a sport, he had surpassed my abilities by a decent margin. After he became serious about training he almost instantly dwarfed me and my abilities.

As parents of swimmers know, being a swim parent involves lots of driving to and from and perhaps even more waiting. Sometimes I killed time waiting for him outside the pool area, looking in through the large glass windows that separated spectators — aka annoying parents — from swimmers. From a distance I was amazed by the progress he made month by month. One summer day a year into his training I stood on a lake’s edge watching him swim happily towards the opposite side.
When he passed the middle a couple of hundred feet out, I was struck by an uncomfortable thought: If anything happened to him now, I wouldn’t be able to help. And had I tried, I would probably have needed help myself. In that very moment I decided to do something about it. I immediately signed up for a beginner’s swim course for adults.

But ten weeks and ten lessons in, hanging from the pool side panting uncontrollably, I was struck by a second thought: The progress I’d seen in the children was impossible to match for us, the slightly overweight 40+ year old newbies. It would take time and patience to become even a so-so swimmer. And, as it turned out, it would take a lot of patience: A few years have passed and only recently have I started to feel that I master freestyle slightly — if swimming 50 meters freestyle without passing out constitutes mastering, that is. My technique is still laughable, breathing continues to be an issue, and I haven’t begun to tumble turn or backstroke yet. So now I know: This takes time.

But to boost my motivation I turned to books — like I do every time I become interested in a new subject. These are not instructional or teach-yourself books, but inspirational books about the topic I’m interested in. In this case about swimmers that do unimaginable feats and/or about the history and cultural impact of swimming. As they work as inspiration for me, maybe they will for you too. That’s why I give you quick reviews of five swimming-related books I’ve read over the last months. They are: Swimming to Antarctica by Lynne Cox, The art of resilience by Ross Edgley, Why we swim by Bonnie Tsui, Open Water by Mikael Rosén, and Grayson, also by Lynne Cox.

### Lynne Cox: Swimming to Antarctica

This is the autobiography of the accomplished open water swimmer Lynne Cox. It starts in the seventies when Lynne’s family moves from the US east coast to California, so that the children can maximise their swim training.
It’s here Lynne discovers that she’s a better long distance swimmer than a sprint swimmer and gradually switches to open water swimming. Soon she participates in her first long distance swim — a 20 mile swim from Catalina Island to mainland Los Angeles — and discovers that she has the potential for record-breaking pace. It seems like she’s a natural at the discipline. Later she will learn that her body is unique in preserving body heat in cold water conditions.

Next up are more feats, such as swimming the English Channel. That one earns her an invitation to Cairo to swim the Nile, etc. While all this is happening, she starts to form an idea of becoming the first to swim from the US to the Soviet Union. Lynne sees this as a way to establish bonds and reduce tension between the people of the two countries. Alas, hardly anyone shares her enthusiasm, so large parts of the book cover the quest of getting the necessary permits to cross between two islands on the Alaskan and Siberian sides of the Bering Strait. That process took maybe ten years and is, to me, the book’s heart and soul. Swimming aside, this tenaciousness is a testament to her ability to persevere not only in open water but also in the intricacy and bureaucracy that is international politics. As such I think Swimming through the Iron Curtain would be a more fitting title than Swimming to Antarctica. But the book is written chronologically and ends with a swim to Antarctica, so I guess that’s why they chose the book’s title.

A weakness with the book is that it’s unusually light, almost coy, when it comes to Lynne’s relations to other humans. Her parents, who must have been an important part of her support, are peculiarly described as almost faceless entities in her vicinity. As for romantic relations, she occasionally alludes rather vaguely to how she enjoys the company of a certain individual or how she admires the muscular body of a fellow swimmer, etc.
But relations are never described deeper than that, and never with more than a few sentences. That means that this autobiography is unusually auto: Her book is a story about herself and her inner journey powered by external journeys — swims that most people can only dream of. But no matter what the story is called or what weaknesses it may have, it’s a great read about an extraordinary human. That’s why I recommend this book.

### Ross Edgley: The art of resilience

I don’t remember how I stumbled across Ross Edgley and his “Great British Swim”, but I guess Google’s impenetrable algorithms had something to do with it. Regardless of how — when I did discover him (2018) he had just started his Red Bull sponsored swim around Great Britain, and posted weekly videos about his progress on YouTube. He synchronised his efforts to the tides for the duration of the swim: For 157 days he swam with the currents for six hours and rested the following six hours aboard his support boat. Non-stop. For the entirety of the journey he never once set foot on land.

When he started the journey the farewell was rather low-key, as the turnout consisted of family and friends. When he finished he’d become a household name and was welcomed by hundreds of other open water swimmers as well as large crowds on the beach. And that was well earned, if you ask me: 157 days — initially they thought they’d use half of that time — and 2884 kilometers (1792 miles) later, he (and his crew) had completed a feat that I think will stay uncontested for a long time.

In short, Ross’s story is an exciting one and he writes really well about it. That part of the book is impeccable. Strangely, the weakest parts are where Ross’s background as a sports scientist comes in. He’s eager to share theories about how to train, explain how endurance vs strength works, suggest workouts, etc. Every chapter ends with these science-based musings.
But they’re not integrated well into the storyline — yes, these too are filled with Ross’s enthusiasm, but all they do for me is punctuate and slow down an otherwise engaging story. What I find peculiar, however, is that if you never watched the YouTube videos, the book makes it seem as if he got the idea and everything fell into place by itself. In real life a Red Bull sponsorship was what made the swim possible. It kept him fed and enabled him to keep a boat and a four person crew with him at all times during the 157 days (but he is generous towards the boat’s captain and attributes much of the success to him).

It’s also interesting that this book is the opposite of Lynne Cox’s memoirs in the sense that what Ross is mostly concerned with is the external journey itself. There are some hints of musings about how the swim influenced his personal development, but they are few and far between. In the grand scheme of things, though, these are small flaws. If you manage to fight through the sports science this is a great read!

### Lynne Cox: Grayson

This book covers one very special day in Lynne Cox’s life — a day that wasn’t covered in her autobiography. One early morning around daybreak, the seventeen-year-old Lynne is midway through a solitary open water swim practice. Suddenly she experiences unusual disturbances in the water, only to discover that they’re caused by a baby gray whale. What seems to be a fun encounter quickly turns into a more serious matter: Communicating with an experienced elderly man on shore, she realises that the baby whale has been separated from his mother. If she swims ashore the infant will follow her, strand, and die. The story details how Lynne slowly coaches the whale out to deeper water in the hope that they’ll by chance find his mother.

Where Swimming to Antarctica was a book as much about Lynne’s inner journeys as her outer, Grayson is even more of an inner journey. The book’s style reflects this.
Grayson has a far more lyrical, introspective and even pensive form than her first book. That’s not only positive. As mentioned it covers the events of this one morning only. The only perspective is Lynne’s, told in the present tense. To stretch a small story about one morning from one person’s perspective to the necessary 150 pages, a lot of the text is inner monologue. In my opinion that slows the narrative down, and not in a good way. The inner monologues become fillers that don’t drive the story.

What’s worse is that much of the inner monologue doesn’t seem entirely believable. The amount of depth and detail that Lynne allegedly remembers events and thoughts with is more than anyone — with the possible exception of Marilu Henner — can recall some 30-odd years later. In addition, many of the thoughts and reflections the 17-year-old Lynne supposedly has are astonishingly mature and filled with knowledge she couldn’t possibly have had at the moment. Scientific facts about gray whales, for instance. These are the thoughts and retrospections of a person in their late forties.

There’s nothing wrong with thoughts and retrospections from late forty-somethings — after all, I’m one myself. And had they been presented as such, as present-day reflections on that extraordinary morning in her teens, this would probably not feel alien to the story at all. But the choice to attribute the thoughts of a soon-to-be 50-year-old to a 17-year-old ends up — to me — as a significant stylistic crash. With that in mind, I can’t help but think that if her editor had cut most of this, they’d have ended up with a tight and great story-driven book for adolescents/young adults. As it is now, it’s not. But if you’re a less critical reader than me you’ll get a reasonably engaging book about the inner and outer journey of an almost superhuman swimmer. Should you want to read only one book by Lynne Cox, however, Swimming to Antarctica is the better choice.
### Mikael Rosén: Open Water

Swedish author Mikael Rosén’s Open Water is not only about swimming itself, but also about swimming’s history, technique, science, cultural implications, racial issues, and more. Although you’d imagine that a mashup of all that would end up… well… mashy, the book is surprisingly clear and interesting despite juggling many sprawling subjects. As such the book really delivers on its subtitle, The History and Technique of Swimming.

Although this book talks about specific swimmers such as the pre-WW2 olympian Johnny Weissmuller, the first man to swim across the English Channel, captain Matthew Webb, or modern athletes like Michael Phelps, this is really not a book about individuals. These people are used to illustrate topics such as improvements in sports science (Weissmuller vs the Japanese swimmers that followed) or the history of open water swimming (captain Webb).

Consistently interesting throughout, the most interesting part may be the second of the total of eight chapters. That section explores prejudice — how female swimmers started to appear on the scene and suddenly broke records previously held by men, or how a white, racist population’s negative reaction to black swimmers at public pools contributed to the establishment and strengthening of segregation laws in the southern states of the USA.

This tour de force of interesting and surprising facts reads a little like Bill Bryson might have written a book about swimming, though less humorous. But still, it’s almost on that level. Most books are not perfect, however, and this book is no exception: Written three years before Ross Edgley’s The Art of Resilience, it shares the latter’s insistence on closing each chapter with a little sports science, training programs, suggestions of drills, etc. They don’t bother me as much in this book — as opposed to Ross’s — as this book is not a chronological, story-driven narrative. Therefore the training parts fit a little better into the whole.
But the book wouldn’t suffer if they’d been edited out. All in all that’s minor criticism, so I recommend this book wholeheartedly.

### Bonnie Tsui: Why we swim

It’s not a coincidence that this book comes last: it is best described in the context of the previously mentioned books. Why we swim is in a way an amalgamation of the science/history aspects of Mikael Rosén’s Open Water and the introspection of Lynne Cox’s books. But where the latter describes her personal growth in retrospect, Bonnie Tsui documents her quest for personal growth through swimming more or less as it happens — as a part of the process of writing the book itself, it seems.

Bonnie Tsui kicks her book off with the story of Guðlaugur Friðþórsson, a fisherman who was the sole survivor after a fishing vessel sank in the frigid winter waters off Iceland. Together with two mates he started to swim towards land, but not long after he was the only one still swimming. Against all odds Friðþórsson survived a six-hour swim in six-degree-Celsius water.

For Tsui this becomes the entry point to the history of swimming. Her book is structured around five main topics, going from Survival, Well-Being, Community, and Competition, ultimately to the more metaphysical and meditative subject of Flow. She takes us on a tour of swimming history, starting in the stone age and the first documentation we have of humans swimming, ending with personal musings about not why we swim, but why she swims. And this inclusion of a very visible I throughout the book — the chapter about Friðþórsson is not only about Friðþórsson but also about her meeting him and her participation in a swim honoring him — means that you can’t separate her personal journey from her exploration of the history, culture and science of swimming.
Granted, Open Water is more hard-core when it comes to facts, but the unique interspersion of the author’s personal story and the overarching topics of the book makes this the most beautifully written of the five books I’ve mentioned here. Read it!

So… do you become a better swimmer by reading? Of course not. Only practice can improve swimming (although you may pick up some valuable hints on how to improve from pure instructional books, such as Terry Laughlin’s Total Immersion). But this being November 2020, the year of Covid-19, all swimming pools are closed and I’m unable to practice and improve for a while. This may be the case for you too. But while you wait for the pools to open again or the summer to heat up the sea to a more welcoming temperature, spending some time on one or more of these books wouldn’t be the worst thing to do. Who knows? Maybe you’ll come back to the water more inspired than before.

## 6guts: Taking a break from Raku core development

### Published by jnthnwrthngtn on 2020-10-05T19:44:26

I’d like to thank everyone who voted for me in the recent Raku Steering Council elections. By this point, I’ve been working on the language for well over a decade, first to help turn a language design I found fascinating into a working implementation, and since the Christmas release to make that implementation more robust and performant. Overall, it’s been as fun as it has been challenging – in large part because I’ve found myself sharing the journey with a lot of really great people. I’ve also tried to do my bit to keep the community around the language kind and considerate. Receiving a vote from around 90% of those who participated in the Steering Council elections was humbling.

Alas, I’ve today submitted my resignation to the Steering Council, on personal health grounds. For the same reason, I’ll be taking a step back from Raku core development (Raku, MoarVM, language design, etc.). Please don’t worry too much; I’ll almost certainly be fine.
It may be I’m ready to continue working on Raku things in a month or two. It may also be longer. Either way, I think Raku will be better off with a fully sized Steering Council in place, and I’ll be better off without the anxiety that I’m holding a role that I’m not in a place to fulfill.

## rakudo.org: Rakudo Star Release 2020.01

### Published on 2020-02-24T00:00:00

## Death by Perl6: zef ++ ecosystem

### Published by Tony O'Dell on 2020-01-21T18:47:24

TLDR; making the entire git ecosystem available to zef is in testing and should be available soon. Module author tools are in early alpha phase and, if you'd like to contribute, please contact @tony-o on freenode.

The current state of the raku ecosystem is not great. It can be better. Currently, most of the modules are hosted on CPAN or in a git repo, with a centralized list of those modules in yet another repo. CPAN has shortcomings with raku because it was designed for perl, and the module specs between the two languages, though related, are dissimilar. The github solution is a bit of a hack. It works, but it's tedious and requires a lot of trust to let module authors just add things to the ecosystem via a repo that can affect so many others.

Enter: zeco.

For authors: module authors should be able to upload their modules without interfering with others' modules, and they should be able to do it without remembering which repo they need to go update, or going to a website and uploading a tar file. Let's make this easy. With the zeco system there are two ways to publish your modules. You can upload a tar file using a zef plugin (think: zef publish or similar; this is preferred), or you can publish your module using a git repo link — this will not be mirrored by zeco (at least not immediately).

For general consumption: zeco provides endpoints for searching and downloading different modules, which will help zef be efficient with your time.
Current status of zeco: The server is up and currently makes several endpoints available to both authors and consumers of modules. Transferring the git repo META list is in testing and, once testing is complete, will provide a full ecosystem list to zef with info on how to obtain each module. There exists a zef plugin to register/upload modules. If you're interested in helping or testing this tool so we can make it ready for production, then please contact tony-o in any of the #raku channels.

## Death by Perl6: Rakudo Nightly & Faster CI

### Published by Tony O'Dell on 2019-09-25T16:22:00

This article is going to walk through using a docker image to CI little modules on Circle CI and Travis CI — tl;dr just copy the appropriate config files for your choice of CI and modify for system dependencies. The major benefit of using the image is speed; an ancillary benefit is that the module can be tested against the latest rakudo build, plus regression tested against whatever tags you find in the tonyodell/rakudo-nightly repo.

## Docker

In the event that you're unfamiliar with docker: docker falls somewhere between a jail and a virtual machine. You don't really need to know much about docker to follow along, and the options used to make certain things work a certain way will be explained in the situations below. For the rest of the tutorial we'll use this repo for rakudo nightly.

## Travis CI

    language: minimal
    services: docker
    before_install:
      - docker pull tonyodell/rakudo-nightly:latest
    script:
      - docker run -v $(pwd):/tmp/test:rw
        -w /tmp/test
        tonyodell/rakudo-nightly:latest
        bash -c 'apt update; apt install -y ca-certificates; zef install -v --deps-only . && zef test .'


### language: minimal

Usually in travis-ci we'd use language: perl6 as it's been available for quite some time.  Because we're using a prefab image and don't want to build rakudo on every CI cycle, we don't need much on the host so we'll stick with the minimal image.

### services: docker

Let Travis know we want to use docker on this run.

### before_install: [ docker pull .. ]

Pull down a recent image of rakudo-nightly so we can test inside of the container later.  If you need a specific version of rakudo-nightly then this is the first of two places to change the tag.  Multiple versions can be pulled here; perhaps testing against a nightly and a known supported release.

### script: [ docker run .. ]

    ... -v $(pwd):/tmp/test:rw ...

This will mount the current directory as /tmp/test, read/writeable, in the container. The $(pwd) works in this case as Travis takes care of putting us into the repository's checkout directory before we get to this point.

    ... -w /tmp/test ...

Sets the working directory for whatever it is we're about to do, in this case we're doing this here so we don't need to cd anywhere or use long paths once we start testing.

    ... tonyodell/rakudo-nightly:latest bash -c ...

This is the second place to change the tag if you want to use a specific nightly for testing.  Check out the repo under the Docker heading above for a list of available tags.

The rest of this is fairly straightforward.  Update apt, install the ca-certificates package so curl can store the ca certs while updating zef mirrors (if you need system level dependencies like libcsv or whatever else then this is the place to install them), install the dependencies in the META6.json for the module, and then test the module.

It's as easy as that: now your Travis test builds should take far less time.  The Bench module's CI ran in a little over a minute, compared to around five and a half minutes before, and in 50s once fully optimized for that module.
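When a Travis run fails, the same environment can be exercised locally before pushing; this is just the config's pull and run steps typed by hand (assuming Docker is installed and you are in the module's checkout directory):

```shell
# Mirror the Travis job locally: mount the checkout at /tmp/test and test it.
docker pull tonyodell/rakudo-nightly:latest
docker run -v "$(pwd)":/tmp/test:rw -w /tmp/test tonyodell/rakudo-nightly:latest \
  bash -c 'apt update; apt install -y ca-certificates; zef install --deps-only . && zef test .'
```

Because the checkout is mounted read/write, any artifacts the tests write stay on the host for inspection.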

## Circle CI

version: 2
jobs:
  build:
    docker:
      - image: tonyodell/rakudo-nightly:latest

    working_directory: ~

    steps:
      - checkout
      - run:
          name: install build deps
          command: |
            apt install -y libsqlite3-dev
            zef install --deps-only .
      - run:
          name: test
          command: |
            zef test .

Circle CI is a little more straightforward because it runs the checkout and commands directly inside of your container.  The example above lets Circle know that you'd like to use the rakudo-nightly image for testing, installs system dependencies using apt (because this image is based upon Ubuntu), installs the module's dependencies, and then finally tests the module.

If you have questions, find anything broken, or would like more topics covered around testing modules with travis or circle (or ??) with docker, feel free to hit me up in freenode#perl6 @tony-o

## Jo Christian Oterhals: Perl 6 small stuff #21: it’s a date! …or: learn from an overly complex solution to a simple task

### Perl 6 small stuff #21: it’s a date! …or: learn from an overly complex solution to a simple task

This week’s Perl Weekly Challenge (#19) has two tasks. The first is to find all months with five weekends in the years from 1900 through 2019. The second is to program an implementation of word wrap using the greedy algorithm.

Both are pretty straight-forward tasks, and the solutions to them can (and should) be as well. This time, however, I’m also going to do the opposite and incrementally turn the easy solution into an unnecessarily complex one. Because in this particular case we can learn more by doing things the unnecessarily hard way. So this post will take a look at Dates and date manipulation in Perl 6, using PWC #19 task 1 as an example:

Write a script to display months from the year 1900 to 2019 where you find 5 weekends i.e. 5 Friday, 5 Saturday and 5 Sunday.

Let’s start by finding five-weekend months the easy way:

#!/usr/bin/env perl6
say join "\n", grep *.day-of-week == 5, map { Date.new: |$_, 1 }, do 1900..2019 X 1,3,5,7,8,10,12;

The algorithm for figuring this out is simple. Given the prerequisite that there must be five occurrences of not only Saturday and Sunday but also Friday, A) the month *must* have 31 days to cram five weekends into. And when you know that, you'll also see that B) the last day of the month MUST be a Sunday and C) the first day of the month MUST be a Friday (you don't have to check for both; if A is true and B is true, C is automatically true too). The code above implements C and employs a few tricks. You read it from right to left, unless you write it from left to right using the feed operator, like this:

say do 1900..2019 X 1,3,5,7,8,10,12 ==> map { Date.new: |$_, 1 } ==> grep *.day-of-week == 5 ==> join "\n";
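As a sanity check, the same reasoning can be sketched outside of Raku. Here is a minimal Python version (the helper name five_weekend_months is mine, not from the post; in Python's datetime, weekday() == 4 means Friday):

```python
from datetime import date

def five_weekend_months(start_year, end_year):
    """(year, month) pairs containing five full Fri+Sat+Sun weekends:
    only 31-day months that begin on a Friday qualify."""
    result = []
    for year in range(start_year, end_year + 1):
        for month in (1, 3, 5, 7, 8, 10, 12):  # the 31-day months
            if date(year, month, 1).weekday() == 4:  # 4 == Friday
                result.append((year, month))
    return result
```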

Using the X operator I create a cross product of all the years in the range 1900–2019 and the months 1, 3, 5, 7, 8, 10, 12 (31-day months). In return I get a sequence containing all year-month pairs of the period.

The map function iterates through the Seq. There it instantiates a Date object. A little song and dance is necessary: As Date.new takes three unnamed integer parameters, year, month and day, I have to do something to what I have — a Pair with year and month. I therefore use the | operator to “explode” the pair into two integer parameters for year and month.

You can always use this for calling a subroutine with fixed parameters, using an array with parameter values rather than having separate variables for each parameter. The code below exemplifies usage:

my @list = 1, 2, 3;
sub explode-parameters($one, $two, $three) { …do something… }

# traditional call
explode-parameters(@list[0], @list[1], @list[2]);

# …or using |
explode-parameters(|@list);

Back to the business at hand — the .grep filters out the months where the 1st is a Friday, and those are our 5 weekend months. So the output of the one-liner above looks something like this:

...
1997-08-01
1998-05-01
1999-01-01
...

This is a solution as good as any, and if a solution was all we wanted, we could have stopped here. But using this task as an example I want to explore ways to utilise the Date class. For instance: the one-liner above does the job, but strictly speaking it doesn't output the months but the first day of those months. Correcting this is easy, because the Date class supports something called formatters, which use the sprintf syntax. To do this you utilise the named parameter "formatter" when instantiating the object:

say join "\n", grep *.day-of-week == 5, map { Date.new: |$_, 1, formatter => { sprintf "%04d/%02d", .year, .month } }, do 1900..2019 X 1,3,5,7,8,10,12;

Every time a routine pulls a stringified version of the date, the formatter object is invoked. In our case the output has been changed to…

...
1997/08
1998/05
1999/01
...

Formatters are powerful. Look into them.

Now to the overly complex solution. This is the unthinking programmer's solution, as we don't presuppose anything. The program isn't told that 5 weekend months can only occur in 31-day months. It doesn't know that the 1st of such months must be a Friday. All it knows is that if the last day of the month is not a Sunday, it has to figure out the date of the last Sunday (this is not very relevant when counting three-day weekends, but could be if you want to find Saturday+Sunday weekends, or only Sundays).

#!/usr/bin/env perl6
my $format-it = sub ($self) {
    sprintf "%04d month %02d", .year, .month given $self;
}

sub MAIN(Int :$from-year = 1900,
         Int :$to-year where * > $from-year = 2019,
         Int :$weekend-length where * ~~ 1..3 = 3) {
    my $date-loop = Date.new($from-year, 1, 1, formatter => $format-it);
    while ($date-loop.year <= $to-year) {
        my $date = $date-loop.later(day => $date-loop.days-in-month);
        $date = $date.truncated-to('week').pred if $date.day-of-week != 7;
        my @weekend = do for 0..^$weekend-length -> $w {
            $date.earlier(day => $w).weekday-of-month;
        };
        say $date-loop if ([+] @weekend) / @weekend == 5;
        $date-loop = $date-loop.later(:1month);
    }
}

This code can solve the task not only for three-day weekends, but also for weekends consisting of Saturday + Sunday, as well as only Sundays. You control that with the command line parameter weekend-length=[1..3].

This code finds the last Sunday of each month and counts whether it has occurred five times that month. It does the same for Saturday (if weekend-length=2) and Friday (if weekend-length=3). Like this:

my @weekend = do for 0..^$weekend-length -> $w {
    $date.earlier(day => $w).weekday-of-month;
};

The code then calculates the average weekday-of-month for these three days like this:

say $date-loop if ([+] @weekend) / @weekend == 5;

This line uses the reduction operator [+] on the @weekend list to find the sum of all elements. That sum is divided by the number of elements. Since no weekday can occur more than five times in a month, the average is 5 only when every one of the checked days occurred five times, in other words when you have a five-weekend month.
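In Python terms, the weekday-of-month bookkeeping and the averaging trick can be sketched like this (the helper names weekday_of_month and is_five_weekend are mine, chosen to mirror the Raku code above):

```python
from datetime import date

def weekday_of_month(d):
    """How many times d's weekday has occurred in d's month, up to and including d."""
    return (d.day - 1) // 7 + 1

def is_five_weekend(counts):
    """The averaging trick: since no count can exceed 5, the mean
    equals 5 only when every checked day occurred five times."""
    return sum(counts) / len(counts) == 5
```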

As for fun stuff to do with the Date object:

.later(day|month|year => Int) — adds the given number of time units to the current date. There’s also an earlier method for subtracting.

.days-in-month — tells you how many days there are in the current month of the Date object. The value may be 31, 30, 29 (February, leap year) or 28 (February).

.truncated-to(week|month|day|year) — rolls the date back to the first day of the week, month, day or year.

.weekday-of-month — figures out what day of week the current date is and calculates how many of that day there has been so far in that month.

Apart from this you’ll see that I added the formatter in a different way this time. This is probably cleaner looking and easier to maintain.

In the end this post maybe isn't about dates and date manipulation at all, but rather a call for all of us to use the documentation even more. I often think that Perl 6 ought to have a function for x, y or z — .weekday-of-month is one such example — and the documentation tells me that it actually does!

It’s very easy to pick up Perl 6 and program it as you would have programmed Perl 5 or other languages you know well. But the documentation has lots of info of things you didn’t have before and that will make programming easier and more fun when you’ve learnt about them.

I guess you don’t need and excuse to delve into the docs, but if you do the Perl Weekly Challenge is an excellent excuse for spending time in the docs!

## Death by Perl6: The 2019 Perl Toolchain Summit

The 2019 Perl Toolchain Summit has come to a close. This annual gathering brings toolchain developers of both Perl and Perl6 together to work on the types of problems that are much easier with in-person collaboration. Attendees arrive with a list of things to work on, and depart with an even bigger list -- this year was no different for me.

There was one pressing zef issue that needed to be resolved: how to gracefully handle failures of the infrastructure hosting the module indexes. Basically the server that was listed as the first mirror for a certain ecosystem died. zef was capable of eventually falling back to a different mirror, but the process was slow. I used a two-part solution: change the primary mirror to an index hosted on github.com, and then add a CLI API for disabling/enabling auto-update for any/all repository plugins. This was a good opportunity to make use of a little perl6 argument handling trick: --/foo=1, which is 1 but False:

# Update all module indexes
--update

# Do not update any module indexes
--/update

# Only update the cpan module index
--update=cpan

# Do not update the cpan module index
--/update=cpan



The rest of the PTS was primarily spent tackling a much more difficult issue: having rakudo precompile installed scripts. Currently if you want to improve the startup speed of a script you have to put the code into a module and then use Module::Script inside the actual script file. Because each installed script also gets a wrapper script installed (that launches the actual script) we don't have to worry about the issue of providing a perl6 entry point. The proof of concept was pretty simple to implement: add the appropriate short-name lookup data (sha1s!) to the file system, and then tweak the way the wrapper script launches its matching script to work something like use bin/my-script.p6. This allowed all the machinery for loading module source vs bytecode + auto precompilation to Just Work (for lack of a better phrase) like it does for modules. This remains an unfinished venture though -- precompilation of basic scripts works, but precompilation of &MAIN has proven to be particularly difficult. jnthn and nine have provided some insights, but there is much more thinking to be done.

It should go without saying that nine and lizmat got to be at the receiving end of a few of my s22 brain dumps. Over the years it has taken a lot of energy to digest s22 to the point where I understand the remaining implementation possibilities. I haven't yet had the time/patience for explaining most of these things online -- the questions usually involve a lack of foundational understanding such that any explanation turns into the equivalent of yak shaving. So I'm always glad when PTS comes around and I get a chance to clarify, in person, how these things should work.

Last but not least I got to have a wide range of conversations that involved Perl6:

• lizmat and skaji discussed what best practices, if any, should be used when generating distribution data with e.g. App::Mi6. In particular the distribution name, and how it should map to the file name used for the generated tar file.

• Some Windows expertise in the form of Mithaldu provided convincing arguments against abusing the DOS subsystem as a valid strategy to avoid certain file naming issues.

• Ingy informed us he was implementing tab completion on all shells for perl, cpanm, perl6, zef.

• charsbar talked to me a little bit about PAUSE, and offered his support to improve the CPAN ecosystem for Perl6 users. I'll need to reflect further on how to merge the Perl6 concept of auth with PAUSE namespace ownership, but this should prove a promising avenue for a future PTS.

The sponsors and organizers for the Perl Toolchain Summit 2019 help to get the right people in the right place at the right time. Without them I'd just be yelling at people in IRC.