# pl6anet

Steve Mynott (Freenode: stmuk) steve.mynott (at)gmail.com / 2019-06-26T12:11:13

## Perl 6 Inside Out: 💡 104. Stooge sort in Perl 6

Have you ever heard of the Stooge sort algorithm? If not, it is quite interesting to learn. The idea is simple, though you may need some time to convince yourself that it really works.

So, take a list of numbers and swap the first and the last elements if they are not sorted properly (that is, if the first element is bigger than the last one).

Then take the first two thirds of the data array and apply the procedure to those elements. When you are back, apply the procedure to the right two thirds (that is, the two thirds anchored to the right edge). And after that, ‘confirm’ the result by stooge-sorting the first two thirds again.

Repeat until everything is sorted.
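As an aside (my own illustration, not from the original post), here is how the two overlapping two-thirds slices look for an 11-element array:

```raku
my @data = ^11;                            # 0 .. 10
say @data[0 ..^ Int(2/3 * @data.elems)];   # (0 1 2 3 4 5 6): the first two thirds
say @data[Int(1/3 * @data.elems) .. *-1];  # (3 4 5 6 7 8 9 10): the right two thirds
```

The slices overlap in the middle, which is what makes the repeated passes eventually sort the whole array.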

This time, with some experience from implementing the previous sorting algorithms, the final code was ready after the first attempt.

```raku
sub stooge-sort(@data) {
    return if [<=] @data;

    @data[0, *-1].=reverse if [>] @data[0, *-1];

    my $l = @data[0 ..^ Int(2/3 * ^@data)];
    my $r = @data[Int(1/3 * ^@data) .. *-1];

    stooge-sort($l);
    stooge-sort($r);
    stooge-sort($l);
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
stooge-sort @data;
say @data;
```

There are a few places where you can improve it a bit. The main one is the [<=] check at the function entrance, which slows the whole algorithm down. One idea for stopping the iteration is to check whether you are at the top level of the recursion instead.

I would also like you to take a closer look at the fact that the function sorts the data in place, so you don’t have to pass array boundaries as function parameters. The $l and $r variables keep lists that refer to slices of the array. They are passed to the next recursion level of stooge-sort, and the function changes the original data. This is quite a useful technique: you can pass a reference to a part of the array and modify it independently.

You can find the source file of today’s post on GitHub; feel free to share your thoughts and ideas! See you tomorrow!

## Perl 6 Inside Out: 💡 103. Merge sort in Perl 6

### Published by andrewshitov on 2019-06-25T07:00:48

Welcome to another sorting episode; this time we’ll talk about Merge sort in Perl 6.

In Merge sort, you first split the data into halves until the pieces become atomic (in the original meaning of the word), that is, until either each piece contains a single element or, after the current split, the second part contains no elements.

The second step is to merge pairs of pieces. If the pieces are trivial, you simply attach one element to another in the correct order and get a one- or two-item sublist. As the fragments grow, you have to take the two pieces and create a new list by taking the values from the beginning of each piece so that the final sequence is sorted (numerically or lexicographically, but we only talk about numbers so far).

The first ad-hoc implementation is shown below.

```raku
sub merge-sort(@data) {
    return @data if @data.elems <= 1;

    sub merge(@l, @r) {
        my @a;
        while (@l and @r) {
            my $v;
            if @l[0] < @r[0] {
                $v = @l.shift;
            }
            else {
                $v = @r.shift;
            }
            @a.push($v);
        }
        @a.push(|@l, |@r);
        return @a;
    }

    my $mid = @data.elems div 2;
    my @l = @data[^$mid];
    my @r = @data[$mid .. *-1];

    @l = merge-sort(@l);
    @r = merge-sort(@r);

    my @a = merge(@l, @r);

    return @a;
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
@data = merge-sort @data;
say @data;
```

Notice that the data is not sorted in-place. Also notice that the merge procedure is contained in a separate internal function.

The algorithm calls itself recursively: for each of the @l and @r parts, another round of merge-sort is initiated before the parts are merged via merge(@l, @r).

Let us beautify the code as much as possible. First, let’s get rid of the separate storage for the halves and move them straight to the function call:

```raku
my @l = merge-sort(@data[^$mid]);
my @r = merge-sort(@data[$mid .. *-1]);
```

The body of the merge function contains a lot of one-line fragments of code, which can be efficiently re-written using a ternary operator.

```raku
while (@l and @r) {
    my $v = @l[0] < @r[0] ?? @l.shift !! @r.shift;
    @a.push($v);
}
```

Now, the merge function seems to be redundant, and its actions can be moved to the place where the function is called. At this step, we can also gain some space from inlining temporary assignments and using postfix structures.

```raku
sub merge-sort(@data) {
    return @data if @data.elems <= 1;

    my $mid = @data.elems div 2;
    my @l = merge-sort(@data[^$mid]);
    my @r = merge-sort(@data[$mid .. *-1]);

    my @a;
    @a.push(@l[0] < @r[0] ?? @l.shift !! @r.shift) while @l and @r;
    @a.push(|@l, |@r);
    return @a;
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
@data = merge-sort @data;
say @data;
```

Perhaps the two pushes deserve a comment. This code gets two arrays that are already sorted (as they are the result of the merge-sort call). Each time, you look at their first elements and pick the smaller one. The first push adds elements to the result while both the @l and @r parts contain elements. After at least one of them is exhausted, the second push adds the remaining elements to the result. The remaining items are never less than the last one taken, as the arrays were already sorted. As the call stack collapses, the array gets sorted.

A final touch can be made to remove the temporary @a variable (and, however sad it may be to reinvent the wheel, make the code look like its Rosetta Code counterpart). In Perl 6, there is a pair of routines, gather and take, that collect data as you generate it and then return the whole at once. Here is the modified version of the function:

```raku
sub merge-sort(@data) {
    return @data if @data.elems <= 1;

    my $mid = @data.elems div 2;
    my @l = merge-sort(@data[^$mid]);
    my @r = merge-sort(@data[$mid .. *-1]);

    return flat(gather {
        take(@l[0] < @r[0] ?? @l.shift !! @r.shift) while @l and @r;
        take(@l, @r);
    });
}
```

Also notice that flat before return allows you not to flatten the data in the second take.
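As a small aside (my own illustration, not part of the original post), here is the difference between a multi-argument take and a flattened result:

```raku
my @x = gather {
    take(1);
    take(2, 3);        # one take of a two-element list
};
say @x.elems;          # 2: the list (2 3) is a single gathered item
say flat(@x);          # (1 2 3)
```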

You are welcome to further play with the code, which is available on GitHub, and merge interesting solutions into it.

## Weekly changes in and around Perl 6: 2019.25 On Toolsets

Damian Conway came out with another blog post about Perl 6 in the past week: Coding with an even fuller toolset, as a follow up on last week’s blog post about tooling. (/r/perl6 comments). It also started a thread on /r/perl. Some excellent reading material again. And there’s a lot more of that this week! But first some multimedia:

## TPCiP Videos

All videos of The Perl Conference in Pittsburgh have been published. These presentations provide Perl 6 content:

Next year’s American Perl Conference will be in Houston!

## Hello World!

Madeleine Goebel tells us that she got the Hello World proof-of-concept running in her GSoC project to create a single executable of a Perl 6 app! Exciting times!

## Daily Posts Again

Andrew Shitov has picked up on his streak of daily blog posts again. In the past days he’s published:

Looking forward to many of these in the future!

## GSoC progress

Antonio Gamiz not only blogged once about his project in the past week, he blogged twice!

## Logic Programming

Jeff Goff not only did an excellent Perl 6 workshop in Pittsburgh, he also got inspired to write a blog post about logic programming, co-inspired by Picat (Facebook comments).

## Summer slurpies, 3 for a $

Matthew ‘Matéu’ Stephen Stuckwisch also wrote an excellent blog post about Variadic functions in Perl 6, or in other words: how and when to use which slurpies (or not). Recommended reading!

## Perl Weekly Challenge

Blog posts with Perl 6 solutions for the Perl Weekly Challenge #13: Challenge #14 is up for your perusal!

## Core developments

- Ticket status of the past week.
- Timo Paulssen continued his work on confprog, the configurable profiler. He also fixed an issue with Date/DateTime objects with specific formatters.
- Ben Davies fixed a memory leak when reading bytes from a socket.
- Jan-Olof Hendig fixed a problem recently introduced on 32-bit systems.
- Paweł Murias fixed an issue with wrapped Javascript functions (on the Javascript backend). And he added basic support for use Foo:from<node.js>;, which should unlock all of node’s libraries to Perl 6 on the Javascript backend!
- Elizabeth Mattijsen fixed an issue with List.reverse and with the use of Junctions in grep.
- Vadim Belman fixed a type-checking issue with a role inheriting from a class. He also fixed a problem with additional containers in exported values.
- And some more improvements and fixes in anticipation of the 2019.06 release.

## Questions about Perl 6

Finally at the 1111 mark on StackOverflow!

## Meanwhile on Twitter

## Meanwhile on perl6-users

## Perl 6 in comments

## Perl 6 Modules

New modules:

Updated modules:

## Winding Down

While suffering from what will be this year’s first heatwave at yours truly’s, it is good to see so many blog posts about Perl 6, while the final blockers for a 2019.06 Rakudo compiler release are being taken care of. See you again next week, hopefully a bit less hot!

## Perl 6 Inside Out: 💡 102. Insertion sort in Perl 6

### Published by andrewshitov on 2019-06-24T07:25:02

Today, we are investigating the Insertion sort algorithm and its possible implementation in Perl 6.
The algorithm’s complexity is O(n²), but it is a good candidate for practicing some Perl 6.

The idea is simple. You find the minimum value in the array and put it in the first position. Then you scan the data starting from the second position (as the first position is already occupied by the lowest element). And you go on to the right, finding minimums and placing them at the current position until you reach the end. It is similar to Selection sort, but instead of swapping the two elements, you insert one (and thus shift the others).

Let us start with the straightforward approach and two nested loops:

```raku
sub insertion-sort(@data) {
    for ^@data -> $i {
        for ^$i -> $j {
            if @data[$i] < @data[$j] {
                @data.splice($j, 0, @data[$i]);
                @data.splice($i + 1, 1);
            }
        }
    }
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
insertion-sort @data;
say @data;
```

In Perl 6, the splice method of arrays can serve two tasks: replace part of the array with another list of elements, or simply remove an element (or a few elements in a row). In the above code, both applications of the method are used. First, the newly found element is inserted at the current position. Second, it is removed from its previous place (the array just grew, so the index became $i + 1).

As the splice method also returns the removed element(s), we can put the second call in the place where the element is being read: @data[$i]. Thus the two lines can be replaced with the following nested calls:

```raku
@data.splice($j, 0, @data.splice($i, 1))
```

Notice that the index is simply $i now, as the array is not yet modified.

You should be already familiar with the second possible trick: let’s use the postfix if:

```raku
sub insertion-sort(@data) {
    for ^@data -> $i {
        for ^$i -> $j {
            @data.splice($j, 0, @data.splice($i, 1))
                if @data[$i] < @data[$j];
        }
    }
}
```

You can stop at this point, but I hope you are not yet satisfied. At least, the two nested fors seem to be a good field for further thinking. Unfortunately, it is not possible to directly use a cross operator to get something like for ^@data X ^@data, as the second list depends on the first one, but there is a completely different way to simplify the code.

The primary goal of the innermost for loop is to find the position where the current element belongs, that is, the first element that is not less than it. Perl 6 gives us the first method, which does exactly that. By default, it returns the element, but we need its index. You get that by adding the :k named parameter:

```raku
@data.first(* >= @data[$_], :k)
```

A bare :k is equivalent to setting the parameter to True: :k(True) or k => True.
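A quick illustration of first with and without :k (my own example, not from the original post):

```raku
my @a = 5, 3, 9, 2;
say @a.first(* > 4);       # 5: the first matching value
say @a.first(* > 4, :k);   # 0: the index of that value
say @a.first(* > 8, :k);   # 2: the index of 9
```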

```raku
sub insertion-sort(@data) {
    for ^@data -> $i {
        @data.splice(
            @data.first(* >= @data[$i], :k),
            0,
            @data.splice($i, 1)
        )
    }
}
```

Finally, make the only remaining for loop a postfix clause, and you are done with a nice one-line function (shown here split into shorter parts on different lines):

```raku
sub insertion-sort(@data) {
    @data.splice(
        @data.first(* >= @data[$_], :k),
        0,
        @data.splice($_, 1)
    ) for ^@data;
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
insertion-sort @data;
say @data;
```

That’s all for now, but if you find something that can be improved, please let us know in the comments below. The source codes are available on GitHub.

## Perl 6 Inside Out: 💡 101. Quick sort in Perl 6

### Published by andrewshitov on 2019-06-23T07:00:40

Today, let’s look at another, and presumably the best-known, method of sorting data: Quick sort. The algorithm requires you to select the so-called pivot, one of the elements of the data, and split the rest into two parts: the elements less than the pivot, and the elements greater than or equal to it. Each part is then recursively sorted again. On each iteration, the parts become smaller and smaller until the sublists are trivial data sets of one or even zero elements.

A separate topic is how to choose the right pivot. There are a few methods, for example taking a value from the middle of the list, taking the first item, the last one, or a random one. There are also more complicated methods, and you can tune the choice to achieve the best performance on your type of data. For simplicity, let’s choose the first element as the pivot, and here is the code:

```raku
sub quick-sort(@data) {
    return @data if @data.elems <= 1;

    my $pivot = @data[0];

    my @left;
    my @right;

    for @data[1..*] -> $x {
        if $x < $pivot {
            push @left, $x;
        }
        else {
            push @right, $x;
        }
    }

    return flat(quick-sort(@left), $pivot, quick-sort(@right));
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
@data = quick-sort @data;
say @data;
```

Unlike the earlier Bubble sort example, this program does not sort in place but returns a new list instead.

Now it is time to transform the code to make it more Perl 6-ish.

Multi-subs come to the rescue again; while they increase the number of lines of code, they make the whole algorithm easier to grasp at first glance.

```raku
multi sub quick-sort(@data where @data.elems <= 1) {
    return @data;
}

multi sub quick-sort(@data where @data.elems > 1) {
    my $pivot = @data[0];
    . . .
}
```

Now, take a look at these two pieces:

```raku
my $pivot = @data[0];

. . .

for @data[1..*] -> $x {
```

The first element needs to be taken, and the rest of the algorithm only deals with the rest of the list. Currently, this is achieved by taking an element and a slice, but there’s a shift method that does exactly what we need and also removes the element from the data. So, let’s use it:

```raku
my $pivot = @data.shift;

. . .

for @data -> $x {
```

Next comes the if/else selection, which can be effectively (although maybe a bit less efficiently) replaced with two greps: one selecting the part before the pivot, the other selecting the rest.

```raku
my @left  = @data.grep(* < $pivot);
my @right = @data.grep(* >= $pivot);
```

Basically, that’s it. What you can also do is get rid of the temporary variables and move the filters into the return statement:

```raku
return flat(
    quick-sort(@data.grep(* < $pivot)),
    $pivot,
    quick-sort(@data.grep(* >= $pivot))
);
```

It all worked before this last change, but now it is broken:

```
$ perl6 quick-sort.pl
Cannot call 'pop' on an immutable 'List'
  in sub quick-sort at 3.pl line 6
  in sub quick-sort at 3.pl line 8
  in block <unit> at 3.pl line 12
```

The problem is that you need to return a single list of numbers, but each subcall of quick-sort returns its own list. You can easily address the issue by slurping the elements, putting a * before the function argument:

```raku
multi sub quick-sort(*@data where @data.elems > 1) { . . .
```

The final code looks like this:

```raku
multi sub quick-sort(*@data where @data.elems <= 1) {
    return @data;
}

multi sub quick-sort(*@data where @data.elems > 1) {
    my $pivot = @data.shift;

    return flat(
        quick-sort(@data.grep(* < $pivot)),
        $pivot,
        quick-sort(@data.grep(* >= $pivot))
    );
}
```

Instead of using the flat routine, you can also flatten each list independently using |:

```raku
return |quick-sort(@data.grep(* < $pivot)),
       $pivot,
       |quick-sort(@data.grep(* >= $pivot));
```

But I think it is still better to have a couple of understandable intermediate variables to avoid punctuation noise:

```raku
multi sub quick-sort(@data where @data.elems <= 1) {
    return @data;
}

multi sub quick-sort(@data where @data.elems > 1) {
    my $pivot = @data.shift;

    my @left  = @data.grep(* < $pivot);
    my @right = @data.grep(* >= $pivot);

    return flat(quick-sort(@left), $pivot, quick-sort(@right));
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
@data = quick-sort @data;
say @data;
```

As homework, create an implementation of the Quick sort algorithm that sorts the data in place.

The code is available on GitHub, you are welcome to join this journey too!

And let me update the post based on some comments from Reddit.

First, instead of two greps, it would be nice to use a function that does the separation in one go (as the original program did with if and else). Actually, Perl 6 has one. It is called classify, and it returns a hash whose keys are all the possible values of the condition. In our case, they will be True (when the element is less than the pivot) and False otherwise.
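Before plugging classify into the sort, here is a tiny standalone illustration (my own example, not from the original post):

```raku
my %part = (4, 5, 1, 7).classify(* < 5);
say %part{True};    # [4 1]: the elements below 5
say %part{False};   # [5 7]: the rest
```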

```raku
multi sub quick-sort(@data where @data.elems > 1) {
    my $pivot = @data.shift;
    my %part = @data.classify(* < $pivot);

    return flat(
        quick-sort(%part{True} // []),
        $pivot,
        quick-sort(%part{False} // [])
    );
}
```

As it may happen that all elements end up on the same side of the pivot, you need to make sure you pass an empty list in that case (thus the // []).

The second change leads to code similar to what is published (as of today) on Rosetta Code. Indeed, if our choice of the pivot is the first element, then it is possible to separate it in the function signature instead of manually taking the item in the code. You can also simplify the signature of the first multi-sub:

```raku
multi sub quick-sort([]) {
    return [];
}

multi sub quick-sort([$pivot, *@data]) {
    my %part = @data.classify(* < $pivot);

    return flat(
        quick-sort(%part{True} // []),
        $pivot,
        quick-sort(%part{False} // [])
    );
}
```

The Rosetta Code version also omits the return statement. You can do that too, though I prefer having an explicit return in the function.

Can you suggest any further clever change?

## Perl 6 Inside Out: 💡 100. Bubble sort in Perl 6

Hey everyone, let’s implement some algorithms in Perl 6.

The first one will be the classical Bubble sort. In essence, you scan the data from left to right and swap the two adjacent items if they are not ordered properly yet. Repeat until the whole data list is ordered.

Here’s the initial straightforward implementation:

```raku
sub bubble-sort(@data) {
    my Bool $done = False;

    while !$done {
        $done = True;

        for 1 .. @data.elems - 1 -> $n {
            if @data[$n - 1] > @data[$n] {
                (@data[$n - 1], @data[$n]) =
                    (@data[$n], @data[$n - 1]);
                $done = False;
            }
        }
    }
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
bubble-sort @data;
say @data;
```

This implementation does in-place sorting, but you don’t need to declare the function argument as is rw. Actually, the compiler will tell you that it is redundant and will stop further compilation:

```
For parameter '@data', '@' sigil containers don't need 'is rw' to be writable
Can only use 'is rw' on a scalar ('$' sigil) parameter, not '@data'
```

Another place I’d like to take a closer look at is the swapping of the two array elements:

```raku
(@data[$n - 1], @data[$n]) = (@data[$n], @data[$n - 1]);
```

Now, as the program is ready and we can confirm it works correctly, let’s transform it to make it more expressive (although maybe a bit less efficient).

```
$ perl6 bubble-sort.pl6
[1 1 2 2 4 5 7 9 10 46 78]
```

The first step would be to simplify the swapping code. On both sides of the equals sign, we are working with the same two elements, with indices $n and $n - 1. It is possible to call the reverse method on the list of two elements and assign the result back:

```raku
(@data[$n - 1], @data[$n]).=reverse;
```

This can be simplified further by using an array slice:

```raku
@data[$n - 1, $n].=reverse;
```

As you can see, the parentheses are not needed any more. The only question, which you have to answer yourself, is whether to surround the .= operator with spaces or not. On one hand, this is a method call (thus, no spaces); on the other, it is an assignment (thus, spaces).

OK, what’s next? Of course, the if statement can be written as a postfix condition, but there are two lines of code in the if block. At this point, I can decide to trade some efficiency for the beauty of the code, and get rid of the Boolean $done flag.

```raku
sub bubble-sort(@data) {
    for 1 .. @data.elems - 1 -> $n {
        if @data[$n - 1] > @data[$n] {
            @data[$n - 1, $n].=reverse;
        }
    }

    bubble-sort(@data) unless [<=] @data;
}
```

This step also allowed us to remove the while block. At the end of the function, a recursive call is made (again, maybe less efficient, but who cares), and the condition that checks whether the array is already sorted is now explicit:

```raku
unless [<=] @data
```

After all that, it is now possible to append a postfix if:

```raku
@data[$n - 1, $n].=reverse if @data[$n - 1] > @data[$n]
```

And even the postfix for:

```raku
@data[$_ - 1, $_].=reverse if @data[$_ - 1] > @data[$_] for 1 .. @data.elems - 1;
```

Also, why not make the condition use both the slice and the reduction operator, as we’ve already done in other parts of the function:

```raku
@data[$_ - 1, $_].=reverse if [>] @data[$_ - 1, $_] for 1 .. @data.elems - 1;
```

The next step is to reduce the range 1 .. @data.elems - 1, which is OK but contains a few more characters than necessary. The -1 part can be replaced with an open range:

```raku
for 1 ..^ @data.elems;
```

After which the elems call is not needed, and Perl 6 can do that for us:

```raku
for 1 ..^ @data;
```

And here’s the current state of the code:

```raku
sub bubble-sort(@data) {
    @data[$_ - 1, $_] .= reverse if [>] @data[$_ - 1, $_] for 1 ..^ @data;
    bubble-sort(@data) unless [<=] @data;
}

my @data = 4, 5, 7, 1, 46, 78, 2, 2, 1, 9, 10;
bubble-sort @data;
say @data;
```

Before we stop for now, here’s another modification that is possible. In general, recursion in Perl 6 can be implemented via multi-subs.
Instead of branching inside the function, let the compiler do the job based on the data:

```raku
multi sub bubble-sort(@data where [<=] @data) {}

multi sub bubble-sort(@data where ![<=] @data) {
    @data[$_ - 1, $_] .= reverse if [>] @data[$_ - 1, $_] for 1 ..^ @data;
    bubble-sort(@data);
}
```

If you don’t like the [<=] check (as it has to scan the whole list), here’s a brute-force version with another loop layer:

```raku
sub bubble-sort(@data) {
    for ^@data {
        @data[$_ - 1, $_] .= reverse if [>] @data[$_ - 1, $_] for 1 ..^ @data;
    }
}
```

The two for loops can be joined using the cross operator:

```raku
sub bubble-sort(@data) {
    for ^@data X 1 ..^ @data {
        my $n = $_[1];
        @data[$n - 1, $n] .= reverse if [>] @data[$n - 1, $n];
    }
}
```

You are invited to work on this code further. I am really curious if and how this can be done even shorter (for example, how to re-use the slice). The source codes of the above-shown programs are available on GitHub.

## Weekly changes in and around Perl 6: 2019.24 Sequences of Int

### Published by liztormato on 2019-06-17T21:59:25

Aaron Sherman would like to see many standard integer number sequences become part of the Math::Sequences module. He would like to see that so much that it’s now decided it will be the goal of the next Squashathon (5-7 July). See you there! (Facebook comments).

## Coding with a full toolset

Damian Conway was again inspired by the Perl Weekly Challenge to write a blog post about the complete toolkit of Perl 6, with the motto: “All the right tools, right at hand.” Recommended reading, as always!

## Survey

JJ Merelo set up another yearly Perl 6 User Survey. Please answer the questions you feel comfortable with if you have any interest in Perl 6 at all! You can also check last year’s results (Facebook comments).

## Grant Committee members

Will Coleda is still looking for new Grant Committee members. The basic work is to review grant proposals for Perl work every 2 months, and to discuss and vote on them.
Members are also asked to be Grant Managers occasionally, talking with and reporting on folks doing grant work. Interested? Contact Will Coleda.

## Linker for Perl 6

Madeleine Goebel again reported on her GSoC progress in Building an ELF File, in which she describes her struggles to grok the intricacies of the ELF header.

## Perl Weekly Challenge

Blog posts in Perl 6 for the Perl Weekly Challenge #12: Challenge #13 is up for your perusal!

## Core developments

- Ticket status of the past week.
- Tobias Boege fixed the associativity of exponentiation in NQP.
- Paweł Murias fixed the serialization/deserialization of native int/str arrays on the Javascript backend.
- Ben Davies refactored global context handling on the Javascript backend.
- Patrick Böker made sure that Rakudo’s core CompUnit::Repository will be in its own directory.
- Vadim Belman fixed a problem with imports getting containerized in some situations.
- Peter du Marchie van Voorthuysen fixed an issue with Channel.fail and the closed method.
- Daniel Green fixed some methods that returned 0|1 instead of True|False.
- Timo Paulssen fixed a problem with Date arithmetic on specially formatted Dates.
- Christian Bartolomäus fixed a problem with Rakudo on the JVM backend, which was similarly fixed by Ben Davies for NQP.
- And some more improvements and fixes in anticipation of the 2019.06 release.

## Questions about Perl 6

## Meanwhile on Twitter

## Meanwhile on perl6-users

## Perl 6 in comments

## Perl 6 Modules

New modules:

Updated modules:

## Winding Down

Not too shabby, considering many core devs were away this week. Please check again next week for more Perl 6 news!

## Weekly changes in and around Perl 6: 2019.23 Complete Course

### Published by liztormato on 2019-06-10T19:47:40

Andrew Shitov has reworked his original grant proposal for a Complete Perl 6 Course With Exercises, which has just been published by The Perl Foundation for public comments (until June 14th), before the grant committee votes on it.
Please comment if you feel the need to! (Facebook comments). And this just in: Moritz Lenz has joined the TPF Grants Committee! Congrats!

## Perl 6 學習手冊

Or in other words, “Perl 6 Study Manual”: a Chinese translation of brian d foy‘s Learning Perl 6, now available from various outlets. Good to see Perl 6 become available to 1.4 billion people in their native language! And if you don’t want to buy the book, there’s always Perl 6 入门 (aka Perl 6 Introduction).

## Linker for Perl 6

Madeleine Goebel reported on her GSoC progress in The Linker for Perl 6, in which she goes into Symbol Resolution and Relocation, among other things. Her first goal is to create an ELF executable from a “Hello World” program. Looking forward to future progress reports!

## Helping the Perl 6 documentation

JJ Merelo points out in a thread on Twitter that the Perl 6 documentation doesn’t appear out of thin air just by itself; it needs dedicated people to keep up with new developments and problems reported by users. Yours truly would like to point out that working on the documentation is also an excellent chance to learn (aspects of) Perl 6 that you didn’t know before. So if you see a problem in the documentation, please do not hesitate to report it, or even better, to create a Pull Request with your fix! Your contributions will be most appreciated by any current or future Perl 6 user.

## Comments on learning Perl 6

Someone using the nickname Shred_Alert has started an informal discussion about aspects of Perl 6 on Reddit, in which the following opinions were posited:

- Perl 6 feels natural to *nix people but not Windows people
- Perl 6 (or any Perl) is write-only, line noise, messy because of regexes
- Perl 6 is slow

Warning: contains some very insightful comments!

## Perl 6 should mimic Python 3

Someone using the nickname marcm28 started a discussion on Reddit titled In 2020, Python3 will murder Python2. Perl6 should do the same thing on Perl5. Why?, which sparked quite a discussion.
Warning: you may need to don some fireproof clothing!

## TPCiP Newsletter

The June newsletter of The Perl Conference 2019 in Pittsburgh, PA has been published, with information about the Arrival Dinners and the River Cruise. Please check it out if you’re going to attend or plan to do so!

## Perl Weekly Challenge

Blog posts in Perl 6 for the Perl Weekly Challenge #11: Challenge #12 is up for your perusal!

## Core developments

- Ticket status of the past week.
- Patrick Böker fixed some issues with relocatable builds.
- Daniel Green removed the profiling overhead from tallies in profiles.
- Ben Davies implemented NQP ops to find out the amount of free memory and total memory used, and fixed some warnings when building the JVM backend. He also fixed an issue with [] being improperly parsed by the internal from-json logic.
- Kaz Wesley fixed a problem with NativeCall‘s HAS attributes and several other related bugs.
- Bart Wiegmans fixed a race condition with creating objects in JIT compilation.
- Nick Logan added JIT compilation for the NQP strfromcodes op.
- Vadim Belman continued his work on the new configuration system and fixed is export on roles.
- Paweł Murias fixed various issues on the Javascript backend.
- Jonathan Worthington performed many optimizations related to matching in both NQP and Rakudo, making a typical split using regexes 40% faster.
- Jan-Olof Hendig fixed a potential collision in bitmap flag values.
- Vadim Belman made use v6.* work again: it now means using the most modern version of Perl 6 (so currently the same as use v6.e.PREVIEW).
- Elizabeth Mattijsen made the internal to-json logic about 2x as fast.
- And quite a few other improvements and fixes.

## Questions about Perl 6

Only 2 to go to reach the 1111 Perl 6 questions mark on StackOverflow!

## Meanwhile on Facebook

## Meanwhile on Twitter

## Meanwhile on perl6-users

## Perl 6 in comments

## Perl 6 Modules

New modules:

Updated modules:

## Winding Down

Wow.
Perl 6 news and contributions to Perl 6 literally from all over the world! Please check in again next week for more Perl 6 news!

## Jo Christian Oterhals: Perl 6 Small Stuff #20: From anonymous one-line functions to full length MAINs with type and error checking

### Published by Jo Christian Oterhals on 2019-06-07T17:57:04

There have been a few weeks where I haven’t followed the Perl Weekly Challenge, but luckily I found some time for Challenge #11 this week. Initially both exercises looked like they would need quite a bit of coding, but after a while it became apparent that both could be solved with relatively simple one- (or two-) liners. I also found that one-liners aren’t always very robust when it comes to error handling, so that gave me an opportunity to explore Perl 6’s built-in type (and content) checking of command-line input parameters, as well as Perl 6’s built-in methods of generating friendly and readable usage output. But before all of that, we’ll start with the second exercise:

Write a script to create an Identity Matrix for the given size. For example, if the size is 4, then create an Identity Matrix 4x4.

I admit that I really don’t know what an identity matrix is or why it’s important, but the challenge provided a link to the Wikipedia page so that I’d at least understand how one would look. The look, simply told, is a square matrix where the main diagonal is filled with 1’s and the rest is filled with 0’s. There are Perl 6 modules that would be helpful, but I thought I’d write a solution from the bottom up. Given the many cool features of Perl 6, generating an identity matrix is one line of code:

```raku
my &idm = -> $s { gather for ^$s -> $y { take map { Int($_ == $y) }, ^$s } };
```

Call the function like this:

```raku
my @identity-matrix = idm(4);
```

and an array containing the identity matrix for your given size is returned. Note the & sigil.
This is the sigil for Code, just as @ is for arrays and % is for hashes. This code utilises Perl 6’s anonymous functions. I could have written this as an ordinary sub, but in the context of a one-liner I think this functional variant looks better: -> $s defines an anonymous function with one parameter ($s). The interesting thing about the code sigil is that it’s optional when you call the function, so you can call it both as my @id-mat = idm(4) and my @id-mat = &idm(4). I think the former looks better.

As you’ll have seen many times before on this blog, I once again use the gather… take… functionality, which gives me the ability to easily build an array (I could have used push on an array too, but again, I feel gather-take looks better and is more concise). ^$s generates a list from 0 to the requested size, and the for loops through that list. For each row I loop through ^$s once more, and use map to populate the identity matrix: if the row number and the column number are the same, the value is set to 1; if not, 0. Instead of using an if statement to check and set this, I combine everything into one with Int($_ == $y). $_ == $y returns True or False; Int() casts this into 1 or 0. The end result is an array of arrays containing your identity matrix.

I could have stopped there, but since this is a Perl Weekly Challenge I thought I should prove that the function works. I do that by adding a second line of code that creates and prints a matrix:

```raku
.join(' ').say for idm(4);
```

The output is:

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

That’s it. Short and simple.
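The construction generalises beyond Perl 6: the whole trick is casting the boolean row == column test to an integer. For readers who want to cross-check the output without a Perl 6 toolchain, here is the same idea as a small Python sketch (the function name idm is simply carried over from the post):

```python
def idm(s):
    """Build an s-by-s identity matrix: a cell is 1 exactly
    when its row and column indices are equal, else 0."""
    return [[int(x == y) for x in range(s)] for y in range(s)]

# Print it the same way the post does, one row per line:
for row in idm(4):
    print(' '.join(map(str, row)))
```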

Having solved this second exercise, I returned to the first. That one looked like it’d need more code.

Write a script that computes the equal point in the Fahrenheit and Celsius scales, knowing that the freezing point of water is 32 °F and 0 °C, and that the boiling point of water is 212 °F and 100 °C.

I guess you could explain equal points in many ways. I think of it as the point where two lines cross. In this case: what’s the point where the Celsius number and the Fahrenheit number are identical and indicate the same absolute temperature?

Before we begin: the answer is 40 below zero (-40). But how do we compute this easily? We have to use the formula for conversion. If x represents the temperature in Celsius, Fahrenheit — y — is calculated like this:

y = 9/5*x + 32

When you look at it like this you’ll see that this looks exactly like a formula for a line, the y = mx + b we know from school — where m is the gradient or the slope of the line and b the y intercept, i.e. the value of y when x = 0. In this case m = 9/5 and b = 32.

The Celsius scale is easy — y is equal to x always, so the formula looks like this:

y = x

To calculate the equal point we have to find the value of x where y is the same for both formulas. We can build the equation for solving this by taking the right-hand sides of the equations for the Fahrenheit and Celsius lines and putting them together like this…

(the right hand side of y = x) = (the right hand side of y = 9/5 * x + 32)
x = 9/5 * x + 32

…and solve the equation. Since 9/5 is 1.8, let’s use that instead. So…

x = 1.8x + 32
x - 1.8x = 32    # or 1.0x - 1.8x = 32
-0.8x = 32       # -0.8x because 1.0 - 1.8 = -0.8
x = 32 / -0.8
x = -40

Now we know enough to not only solve a comparison between Celsius and Fahrenheit, but between Celsius and whatever scale you’d throw at it. We have to implement x = 32 / -0.8 in some way. Or, figure out a way to make -0.8 generic, i.e. put in any scale and get this to work.

Here’s the key: the gradient 9/5 or 1.8 is really just the Fahrenheit degrees for the boiling point of water minus the same for the freezing point, divided by the corresponding difference for Celsius (which happens to be 100). I.e. (212-32)/100 = 9/5 = 1.8. So if we have the freezing and the boiling point of any temperature scale other than Celsius itself, the gradient for that scale is:

(boiling point - freezing point) / 100

Now, if we peek at one of the steps of the calculation above, 1.0x - 1.8x = 32, we see that we have to subtract the gradient from 1.0, so that we end up with the final

y-intercept (or freezing point) / (1.0 - gradient)

…or more verbosely

freezing point / (1.0 - ((boiling point - freezing point) / 100))
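To sanity-check the derivation numerically, here is a small Python sketch (the function name equal_point is mine, not from the post); the gradient is (boiling point - freezing point) / 100, the same (212 - 32) / 100 computation as above:

```python
def equal_point(freeze, boil):
    """Temperature at which a custom scale and Celsius read the same.

    Solve x = gradient * x + freeze, where gradient = (boil - freeze) / 100.
    Returns None when the scale runs parallel to Celsius (gradient == 1),
    since then the two lines never cross.
    """
    gradient = (boil - freeze) / 100
    if gradient == 1:
        return None
    return freeze / (1 - gradient)

print(equal_point(32, 212))  # Fahrenheit vs Celsius: -40.0
print(equal_point(20, 80))   # a made-up 20/80 scale: 50.0
```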

If we assume that we have a variable $a for the freezing point and a variable $b for the boiling point, a Perl 6 one-liner would look like this:

-> $a, $b { say "Equal point: " ~ ( $b - $a == 100 ?? "None" !! $a / (1 - (($b - $a) / 100)) ) }(32, 212);

Substitute 32, 212 with anything you’d like. Let’s say you invented a temperature scale where 20 is freezing and 80 is boiling; swap (32,212) with (20,80). The result? 50. Quick and dirty — but effective.

However, the code above will crash under certain circumstances, and in many scenarios it will calculate that the equal point is at a temperature that simply does not exist (temperatures below absolute zero). So I thought I’d implement this with a little error handling as well. And I think that version showcases some Perl 6 greatness that I haven’t touched upon earlier:

File: eqpoint.p6
Run:  ./eqpoint.p6 32 212
      ./eqpoint.p6 fahrenheit
....................

#!/usr/bin/env perl6

multi MAIN(     #= Compares the C. scale to any scale you dream up
  Real $f,      #= freezing point in custom scale
  Real $b       #= boiling point in custom scale
) {
  say "There is no equal point for this scale." and exit if $b - $f == 100;
  my $equal-point = $f / (1 - (($b - $f) / 100));
  say "The calculated equal point is only theoretical as it is below absolute zero."
    if $equal-point < -273.15;
  say "Equal point: $equal-point";
}

multi MAIN(     #= Compares the C. scale to a named temperature scale
  Str $scale where { $_ ~~ m:i/^(fahrenheit|kelvin|rankin)$/ }
                #= Name of scale (Fahrenheit, Kelvin or Rankin)
) {
  given $scale.fc {
    when "fahrenheit" { MAIN( 32    , 212   ); }
    when "kelvin"     { MAIN(273.15, 373.15); }
    when "rankin"     { MAIN(491.67, 671.67); }
  }
}

The use of the MAIN subroutines gives us some command line parsing for free. Not only that, but Perl 6 does type checking for us — in the case of the first multi MAIN, it ensures that the two parameters it receives actually are numbers. The second multi MAIN looks only for a single string, and only three strings in particular: fahrenheit, kelvin and rankin.
The program won’t run if you try to start it with any other parameters. Instead it spits out a Usage text like this:

Usage:
  eqpoint.p6 <f> <b> -- Compares the Celsius scale to whatever scale you dream up
  eqpoint.p6 <scale> -- Compares the Celsius scale to a given temperature scale

    <f>        freezing point in this scale
    <b>        boiling point in this scale
    <scale>    Name of scale (Fahrenheit, Kelvin or Rankin)

You’ll see that this custom usage text is caused by my use of #= in the code above. #= behind the subroutine name is the generic description of the usage. Behind parameter variables they describe that particular parameter.

Now, when calculating the gradient I do some value checking to avoid certain scenarios. First I check whether the difference between boiling and freezing in the user’s custom scale equals 100. If it does, that scale runs in parallel with the Celsius scale. That means that the lines never cross and there is no equal point. If that’s the case I print out a message about that and exit. (The equation would have caused a divide-by-zero error had I moved on.)

In addition I check whether the calculated equal point is below -273.15 Celsius. If it is, the point is below absolute zero and would never ever occur in this universe. In those scenarios I point out that the equal point is only theoretical.

I think it’s extra fun when the challenges, well, challenge me to explore little-used areas of Perl 6. I got to do lots of that today. If you haven’t tried your hand at the Perl Weekly Challenge yet, I highly recommend it.

## Weekly changes in and around Perl 6: 2019.22 When steroids are a given

### Published by liztormato on 2019-06-03T15:17:30

Matthew ‘Matéu’ Stephen Stuckwisch has written a very nice blog post about the magic that given and when can bring to the readability of your code. It got quite some feedback! Even though smart-matching has been a thing in the Perl world for quite some time, the concept of smart-matching seemed to startle many people.
See discussions on Hacker News, /r/perl6, /r/ProgrammingLanguages, Facebook and Twitter: Simon Proctor, Jonathan Stowe, Ted Davis, Amjad Masad, The Perl Shop.

## German Perl Workshop Videos

The videos of the German Perl Workshop have been uploaded. People interested in Perl 6 presentations should look at:

## French Perl Workshop

In only a few weeks, there will be another French Perl Workshop with the motto “Perl Workshop — Act XV: three languages, three communities, one hackfest, one conference”. Strasbourg is a very nice city, with nice people and a nice venue, so yours truly is very tempted to attend.

## pyx

Ralph Mellor started an interesting thought experiment to encourage real newbies to programming to try exploration of a language with a simplified Perl 6 syntax, which caused quite a discussion (and a name change from the originally suggested pyrl). Intriguing!

## Concurrent evolution

JJ Merelo reported on his poster about concurrent evolutionary algorithms in Perl 6, presented last April in Leipzig.

## More cryptography

Arne Sommer continued his series of blog posts on DIY Cryptography with Perl 6 with Part 5: Real Text. Again, worth reading!

## Azure Automation

Alexey Melezhik describes how you can automate the interaction with the Azure KeyVault using Sparrow6.

## Grant proposals!

Another call for grant proposals has been made by The Perl Foundation. Please submit your ideas to make Perl 6 better!

## Quick Syntax Reference

A new Perl 6 book was announced: Perl 6 Quick Syntax Reference by JJ Merelo. Too bad we will have to wait until November 11 before we can really get it (Facebook comments).

## Starting with the GSOC

Madeleine Goebel describes what she did to come up to speed for her part in the Google Summer of Code in a blog post titled: “Getting Started: Developing for Perl 6“. It provides some very useful links to anybody wanting to get into helping out with Rakudo Perl 6 internals. Looking good so far!

## Whatever whenever does

Wenzel P. P.
Peppmeyer went a little deeper still into the inner workings of whenever in Whatever whenever does, inspired by Jonathan Worthington‘s answer to “Is whenever signal() in react block order dependent?“.

## Perl Weekly Challenge

Blog posts in Perl 6 for the Perl Weekly Challenge #10: Challenge #11 is up for your perusal!

## Core developments

• Ticket status of the past week and the month of May.
• Jan-Olof Hendig updated the libuv library used by MoarVM, to version 1.29.1.
• Jonathan Worthington made entry to all spesh’d/JIT-compiled frames faster.
• Nick Logan made sure some UTF-8 decoding ops are JITted.
• Patrick Böker made sure that CMP files are properly cleaned up on Windows.
• Vadim Belman continued his work on the configuration subsystem. He also fixed some issues related to errors occurring during error reporting and assigning Nil to scalars with captured types. And finally, all his work in the past months to make use v6.e.PREVIEW work was finally merged!
• Timo Paulssen fixed a problem with the profiler.
• Nick Wellnhofer fixed a problem with sorting 2-element lists with a mapper.
• Elizabeth Mattijsen made the built-in to-json converter about 60% faster. She also made List.reverse about 4x as fast.
• And quite a few other improvements and fixes.

## Questions about Perl 6

Only 5 to go to get to the 1111 Perl 6 questions mark on StackOverflow!

## Meanwhile on Facebook

## Meanwhile on Twitter

## Perl 6 in comments

## Perl 6 Modules

New modules:

Updated modules:

## Winding Down

So many blog posts and so much discussion about Perl 6. Good to see! Please check in next week for more Perl 6 news.

## gfldex: Whatever whenever does

### Published by gfldex on 2019-05-31T10:24:59

Jnthn answered the question why $*IN.lines blocks in a react block. What isn’t explained is what whenever actually does before it starts blocking.

react {
    whenever $*IN.lines { .say }
}

Looking at the syntax of a whenever block, we see that whenever takes a variable immediately followed by a block. The only place where a structure like that can be defined is Grammar.nqp.

rule statement_control:sym<whenever> {
    <sym><.kok>
    [
    || <?{
            nqp::getcomp('perl6').language_version eq '6.c'
              || $*WHENEVER_COUNT >= 0
       }>
    || <.typed_panic('X::Comp::WheneverOutOfScope')>
    ]
    { $*WHENEVER_COUNT++ }
    <xblock($PBLOCK_REQUIRED_TOPIC)>
}

Here the grammar just checks a few things without actually generating any code. So we head to Actions.nqp.

method statement_control:sym<whenever>($/) {
    my $xblock := $<xblock>.ast;
    make QAST::Op.new(
        :op<call>, :name<&WHENEVER>, :node($/),
        $xblock[0], block_closure($xblock[1])
    );
}

The whenever block is converted to a call to sub WHENEVER which we find in Supply.pm6.

sub WHENEVER(Supply() $supply, &block) {

There we go. A whenever block takes its first argument of any type and calls .Supply on it, as long as Any is a parent of that type. In the case of $*IN that type will typically be whatever IO::Handle.lines returns.

Seq.new(self!LINES-ITERATOR($close))

To turn a Seq into a Supply, Any.Supply calls self.list.Supply. Nowhere in this fairly long chain of method lookups (this can’t be fast) are there any threads to be found. If we want to fix this we need to sneak a Channel into $*IN.lines, which does exactly that.

$*IN.^can('lines')[1].wrap(my method {
    my $channel = Channel.new;
    start {
        for callsame() {
            last if $channel.closed;
            $channel.send($_)
        }
        LEAVE $channel.close unless $channel.closed;
    }
    $channel
});
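The underlying pattern, a worker draining a blocking producer and handing items over through a channel so the consumer sees a non-blocking stream, is not Perl 6 specific. Here is a rough Python analogue (all names are mine), with a thread standing in for start and a queue standing in for the Channel:

```python
import queue
import threading

_DONE = object()  # sentinel marking the end of the stream

def channelify(iterable):
    """Drain a (possibly blocking) iterable on a worker thread and
    yield its items from a queue, so the consumer never waits on the
    producer directly -- roughly what wrapping lines() in a Channel
    achieves in the Perl 6 snippet above."""
    q = queue.Queue()

    def pump():
        for item in iterable:
            q.put(item)
        q.put(_DONE)

    threading.Thread(target=pump, daemon=True).start()

    while True:
        item = q.get()
        if item is _DONE:
            return
        yield item

for line in channelify(["one", "two"]):
    print(line)
```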

Or if we want to be explicit:

use Concurrent::Channelify;

react {
whenever signal(SIGINT) {
say "Got signal";
exit;
}
    whenever $*IN.lines⇒ {
        say "got line";
    }
}

We already use ⚛ to indicate atomic operations. Maybe using prefix:<∥> to indicate concurrency makes sense. Anyway, we got lucky once again that Rakudo is implemented (mostly) in Perl 6, so we can find out where we need to poke it whenever we want to change it.

## Weekly changes in and around Perl 6: 2019.21 That’s Why

### Published by liztormato on 2019-05-27T14:05:37

Damian Conway admits that he’s been quietly playing along with the Perl Weekly Challenge in a blog post titled Why I Love Perl 6. In it, he shows the true power of TIMTOWTDI. It sparked quite a few discussions: /r/perl6, /r/programming, Perl 6 Facebook group, Facebook and Lobsters.

## Summer warming up

The Google Summer of Code project for Perl 6 has started. Please welcome Zhongnian Tao, Madeleine Goebel, Antonio Gámiz Delgado and Joel Schüller, who will be improving several aspects of Perl 6 over the summer! Please support them if they have questions (although such a request is really superfluous).

## Controlling the Profiler

Timo Paulssen elaborates in A Close Look At Controlling The MoarVM Profiler about a small DSL called confprog that can be used to control what, when or where code profiling should take place in a running Perl 6 program (Reddit comments). It also contains a nice plug for Nadim Khemir‘s Data::Dump::Tree and some cat pictures.

## Auto-clickable in the browser

Timo Paulssen also created a script to be used with GreaseMonkey (and the like) that will automatically make filenames with line numbers clickable, e.g. when browsing error reports on Github. See also screen shots: the blue parts are the auto-generated links.

## Sparrow6 progress report

Alexey Melezhik reports on his progress moving the Sparrow ecosystem to Perl 6. It also introduces S6, a Sparrow6 command-line interface. A must read if you’re interested in Sparrow, or automation frameworks in general.
## Result 0.2.0 breaking changes

Sam Gillespie announced breaking changes to his Result module, which introduces functional error handling in the same way that Rust does (Reddit comments).

## Perl Weekly Challenge

Blog posts in Perl 6 for the Perl Weekly Challenge #9: Challenge #10 is up for your perusal!

## Core developments

• Ticket status of the past week.
• Madeleine Goebel fixed a problem with shutting down a libuv event loop. Her first MoarVM commit of many to come!
• Samantha McVey added support for malloc_trim (to reduce MoarVM memory usage) on OS’s that support it.
• Vadim Belman continued his work on revamping the configuration process; Oleksii Varianyk and Kaz Wesley helped him with that.
• Elizabeth Mattijsen added a :check named parameter to EVAL, to only have the code compile, but not be executed. And she fixed an issue with initializing native arrays from arrays with deleted elements.
• Leon Timmermans fixed an issue that affected users that implement a RUN-MAIN.
• Tadeusz Sośnierz fixed a bug in Date.truncate-to.
• And some other improvements and fixes.

## Questions about Perl 6

Crossed the 1100 Perl 6 questions mark on StackOverflow!

## Meanwhile on Facebook

## Meanwhile on Twitter

## Perl 6 in comments

## Perl 6 Modules

New modules:

Updated modules:

## Winding Down

Damian Conway showing his face again in Perl 6 land, after having seemingly retired from Perl down under, made my day. Meanwhile, yours truly has been working on an API for profiling Perl 6 code. More on that next week. See you then!

## my Timotimo \this: A Close Look At Controlling The MoarVM Profiler

### Published by Timo Paulssen on 2019-05-22T14:41:13

This is slightly tangential to the Rakudo Perl 6 Performance Analysis Tooling grant from The Perl Foundation, but it does interact closely with it, so this is more or less a progress report.
The other day, Jonathan Worthington of Edument approached me with a little paid job: profiling very big codebases can be tedious, especially if you're interested in only one specific fraction of the program. That's why I got the assignment to make the profiler "configurable". On top of that, the Comma IDE will get native access to this functionality.

The actual request was just to allow specifying whether individual routines should be included in the profile via a configuration file. That would have been possible with just a simple text file that contains one filename/line number/routine name per line. However, I have been wanting something in MoarVM that allows much more involved configuration for many different parts of the VM, not just the profiler.

Obligatory cat photo.

That's why I quickly developed a small and simple "domain-specific language" for this and similar purposes. The language had a few requirements:

• It should not be possible to write a program that loops infinitely
• It should have access to MoarVM internal data that is normally not accessible
• It shouldn't allow changing internal data so that things could break
• It should allow reasonably complex calculations and comparisons

There's also some things that aren't necessary:

• It doesn't need to be very pleasant to write
• It doesn't have to be succinct
• It doesn't have to have every feature under the sun

While thinking about what exactly I should build – before I eventually settled on building a "programming language" for this task – I bounced back and forth between the simplest thing that could possibly work (for example, a text file with a list of file/line/name entries) and the most powerful thing that I can implement in a sensible timeframe (for example, allowing a full NQP script).
A very important realization was that as long as I require the first line to identify what "version" of configuration program it is, I could completely throw away the current design and put something else instead, if the need ever arises. That allowed me to actually commit to a design that looked at least somewhat promising. And so I got started on what I call confprog. Here's an example program. It doesn't do very much, but shows what it's about in general:

version = 1
entry profiler_static:
log = sf.name;
profile = sf.name eq "calculate-strawberries"
profile |= sf.cu.filename eq "CustomCode.pm6"

The entry decides which stage of profiling this program is applied to. In this case, profiler_static means we're seeing a routine for the first time, before it is actually entered. That's why only the information every individual invocation of the frame in question shares is available via the variable sf, which stands for Static Frame. The Static Frame also allows access to the Compilation Unit (cu) that it was compiled into, which lets us find the filename.

The first line that actually does something assigns a value to the special variable log. This will output the name of the routine the program was invoked for. The next line will turn on profiling only if the name of the routine is "calculate-strawberries". The line after that will also turn on profiling if the filename the routine is from is "CustomCode.pm6".

Apart from profiler_static, there are a couple more entry points available.

• profiler_static, which runs the first time any routine is encountered, and stores the decision until the profile run is finished.
• profiler_dynamic will be evaluated every time a routine gets entered that would potentially be profiled otherwise (for example because a profiler_static program turned it on, or because there is no profiler_static program, which means that every routine is eligible for profiling).
Anything about the current execution context is available here, including the name of the current routine, the count and types of arguments passed, the routine that called this routine, and so on. Information that seems less immediately useful, like "how many times has the GC run yet?" or "how many threads are there right now?", is also available for very special cases.

• heapsnapshot will be run every time a GC run happens. It lets the user decide whether a GC run should result in a heap snapshot being taken or not, based on whether the run has been a minor or major collection, and arbitrary other factors, including time since start of the program, or time since the last heap snapshot was taken.

• spesh and jit I'm not entirely sure about yet. I want to allow turning the specializer or the JIT off completely for certain routines, but I also want to offer control over individual optimizations in spesh (turning off inlining for one particular routine, preventing scalar replacement but only for Rat types, ...) and behaviour of the JIT (use the template JIT for everything except one specific piece of one particular routine, ...). How exactly this should look in the program, I do not know yet. It's also probably not going to be very interesting for end-users, but finding bugs in spesh and the JIT could be easier if you can pinpoint (even automatically!) the exact place something goes wrong, down to "which optimization at what position makes the difference".

The syntax is still subject to change, especially before the whole thing is actually in a released version of MoarVM. There is a whole lot of other things I could imagine being of interest in the near or far future.
One place I'm taking inspiration from is where "extended Berkeley Packet Filter" (eBPF for short) programs are being used in the Linux kernel and related pieces of software: oversimplifying a bit, BPF was originally meant for tcpdump, so that the kernel doesn't have to copy all data over to the userspace process in order for the decision about what is interesting or not to be made. Instead, the kernel receives a little piece of code in the special BPF language (or bytecode) and can calculate the decision before having to copy anything.

eBPF programs can now also be used as a complex ruleset for sandboxing processes (with "seccomp"), to decide how network packets are to be routed between sockets (that's probably for Software Defined Networks?), what operations a process may perform on a particular device, whether a trace point in the kernel or a user-space program should fire, and so on.

So what's the status of confprog? I've written a parser and compiler that feeds confprog "bytecode" (which is mostly the same as regular MoarVM bytecode) to MoarVM. There's also a preliminary validator that ensures the program won't do anything weird, or crash, when run. It is much too lenient at the moment, though. Then there's an interpreter that actually runs the code. It can already take an initial value for the "decision output value" (great name, isn't it) and it will return whatever value the confprog has set when it runs. The heap snapshot profiler is currently the only part of MoarVM that will actually try to run a confprog, and it uses the value to decide whether to take a snapshot or not.

Next up on the list of things to work on:

• There are barely any operators supported, like for comparing numbers, or arithmetic.
• There is not yet anything for floating point numbers, but I would like to offer support for that, since it's how we usually represent time.
• Values can not yet be persisted between runs, so you can't, for example, store the time of the last run, or how often a given decision was taken, etc.
• Printing to a log by assigning to a variable named "log" looks very odd, so the language will want to have syntax for function calls.
• Random decision making would be nice, for stochastic things.

Apart from improvements to the confprog programming language, the integration with MoarVM lacks almost everything, most importantly installing a confprog for the profiler to decide whether a frame should be profiled (which was the original purpose of this assignment). After that, and after building a bit of GUI for Comma, the regular grant work can resume: flame graphs are still not visible on the call graph explorer page, and heap snapshots can't be loaded into moarperf yet, either.

Thanks for sticking with me through this perhaps a little dry and technical post. I hope the next one will have a little more excitement! And if there's interest (which you can signal by sending me a message on irc, or posting on reddit, or reaching me via twitter @loltimo, on the Perl 6 discord server etc) I can also write a post on how exactly the compiler was made, and how you can build your own compiler with Perl 6 code. Until then, you can find Andrew Shitov's presentations about making tiny languages in Perl 6 on youtube. I hope you have a wonderful day; see you in the next one!

- Timo

PS: I would like to give a special shout-out to Nadim Khemir for the wonderful Data::Dump::Tree module which made it much easier to see what my parser was doing. Here's some example output from another simple confprog program:

[6] @0
├ 0 = .Label .Node @1
│ ├ $.name = heapsnapshot.Str
│ ├ $.type = entrypoint.Str
│ ├ $.position is rw = Nil
│ └ @.children = [0] @2
├ 1 = .Op .Node @3
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @4
│   ├ 0 = .Var .Node @5
│   │ ├ $.name = log.Str
│   │ └ $.type = CPType String :stringy  @6
│   └ 1 = String Value ("we're in") @7
├ 2 = .Op .Node @8
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @9
│   ├ 0 = .Var .Node @10
│   │ ├ $.name = log.Str
│   │ └ $.type = CPType String :stringy  §6
│   └ 1 = .Op .Node @12
│     ├ $.op = getattr.Str
│     ├ $.type is rw = CPType String :stringy  §6
│     └ @.children = [2] @14
│       ├ 0 = .Var .Node @15
│       │ ├ $.name = sf.Str
│       │ └ $.type = CPType MVMStaticFrame  @16
│       └ 1 = name.Str
├ 3 = .Op .Node @17
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @18
│   ├ 0 = .Var .Node @19
│   │ ├ $.name = log.Str
│   │ └ $.type = CPType String :stringy  §6
│   └ 1 = String Value ("i am the confprog and i say:") @21
├ 4 = .Op .Node @22
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @23
│   ├ 0 = .Var .Node @24
│   │ ├ $.name = log.Str
│   │ └ $.type = CPType String :stringy  §6
│   └ 1 = String Value ("  no heap snapshots for you my friend!") @26
└ 5 = .Op .Node @27
  ├ $.op = =.Str
  ├ $.type is rw = Nil
  └ @.children = [2] @28
    ├ 0 = .Var .Node @29
    │ ├ $.name = snapshot.Str
    │ └ $.type = CPType Int :numeric :stringy  @30
    └ 1 = Int Value (0) @31
Notice how it shows the type of most things, like name.Str, as well as cross-references for things that appear multiple times, like the CPType String. Particularly useful is giving your own classes methods that specify exactly how they should be displayed by DDT. Love It!

## gfldex: Nil shall warn or fail but not both

As announced earlier I went to write a module to make Nil.list behave a little better. There are basically two ways Nil could be turned into a list. One should warn the same way as Nil.Str does, and the other should end the program loudly. Doing both at the same time, however, does not make sense.

There are a few ways this could be done. One is augmenting Nil with a list method and having this method check a dynamic variable to pick the desired behaviour. That would be slow and might hurt if Nil.list is called in a loop. The other is by using a custom sub EXPORT and a given switch.

# lib/NoNilList/Warning.pm6
use NoNilList 'Warning';
# lib/NoNilList/Fatal.pm6
use NoNilList 'Fatal';
# lib/NoNilList.pm6

sub EXPORT($_?) {
    given $_ {
        when 'Warning' {
            # augment Nil with a warning .list
        }
        when 'Fatal' {
            # augment Nil with a failing .list
        }
        default {
            die 'Please use NoNilList::Warning or NoNilList::Fatal.';
        }
    }

    %() # Rakudo complains without this
}

Now use NoNilList; will yield a compile time error with a friendly hint how to avoid it.

I left the augmenting part out because it does not work. I thought I stepped on #2779 again but was corrected that this is actually a different bug. Jnthn++ fixed part of that new bug (yes, Perl 6 bugs are so advanced they come in multiple parts) and proposed the use of the MOP instead. That resulted in #2897. The tricky bit is that I have to delay augmentation of Nil to after the check on $_, because augment is a declarator and as such executed at compile time — in a module that can be months before the program starts to run. Both an augment in an EVAL string and the MOP route would lead there. I wanted to use this module as my debut on 6PAN. That will have to wait for another time.

If you find a bug please file it. It will lead to interesting discoveries for sure.

## Jo Christian Oterhals: Perl 6 small stuff #19: a challenge of niven numbers and word ladders

### Published by Jo Christian Oterhals on 2019-05-10T12:05:02

After a week’s hiatus I’ve returned to the Perl Weekly Challenge. This is the seventh challenge so far. As before there are two exercises. The first one is to calculate all niven numbers between 0 and 50. Niven numbers are integers that are divisible by the sum of their digits. I.e. 47 is a niven number if 47 / (4 + 7) is an integer without remainder (it’s not, as the result is 4.2727…; 48 is, as 48 / (4 + 8) is 4).

These kinds of operations are called modulo operations, or mods. Most programming languages use the operator % for this, so that you can check for this by testing whether 47 % (4 + 7) == 0. But Perl 6 has an additional operator, a shorthand for the former, written as 47 %% (4 + 7). %% returns True or False and lets you skip the == 0 part.

To find the individual digits of a number we use the .comb method. That returns a list with the individual digits (I’ve used .comb on earlier challenges as well, but then to divide strings into their individual characters).
On the combed list we use something called a reduction operator, specifically [+]. What this does is loop through the list and add all of the numbers within. Where you once would have written something like

my $result = 0;
$result += $_ for 1, 2, 3, 4, 5;
say $result;

you can now do the same with a simple say [+] 1, 2, 3, 4, 5. There are lots of these reduction operators. They save time, so look into them.

Knowing these two things, it becomes apparent that calculating niven numbers with Perl 6 can be done with a simple and reasonably readable one-liner.

.say if $_ %% [+] .comb for 0..50;

That’s it.
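The same two ingredients, a digit sum and a divisibility test, are easy to restate in Python for cross-checking (the helper name is mine; note the guard for a zero digit sum, which only the case n == 0 would trigger):

```python
def is_niven(n):
    """True when n is divisible by the sum of its digits;
    guard against a zero digit sum (the case n == 0)."""
    digit_sum = sum(int(d) for d in str(n))
    return digit_sum != 0 and n % digit_sum == 0

print([n for n in range(1, 51) if is_niven(n)])
```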

The second excercise is a little harder: “A word ladder is a sequence of words [w0, w1, …, wn] such that each word wi in the sequence is obtained by changing a single character in the word wi-1. All words in the ladder must be valid English words. Given two input words and a file that contains an ordered word list, implement a routine (e.g., find_shortest_ladder(word1, word2, wordlist)) that finds the shortest ladder between the two input words.”

There are a lot of givens and requirements, so you should look at the web page of challenge 7 to see them all. But the most important of them is that the two words you want to create a ladder between have to be of the same length. And the list of words you use for this must also only contain words of that length. Additionally, you’re only supposed to compare lowercase words.

Knowing all that, here’s my attempt at exercise 2.

File: challenge2.p6
#!/usr/bin/env perl6
# This script needs a plain-text dictionary of words to work.
# MacOS has one built-in here. May have a different location
# or not be present at all on other systems.
constant $DICTIONARY = "/usr/share/dict/words";

subset Str-lc of Str where * ~~ /^<lower>+$/;

sub MAIN(Str-lc $word1,
     Str-lc $word2 where {
        sprintf("%s", $word1).chars == sprintf("%s", $word2).chars
      },
     Bool :$list-all = False) {

  my @ladders = [];
  my @words = $DICTIONARY.IO.lines.grep(
    { $_ eq $_.lc && $_.chars == $word1.chars }
  );

  gen-word-ladder($word1, [ $word1 ], {});

  for @ladders.sort({ $^a.elems cmp $^b.elems }).kv -> $i, @ladder {
    say @ladder.join(" -> ");
    exit if !$list-all && $i == 0;
  }

  sub gen-word-ladder(Str $word, @ladder is copy, %seen is copy) {
    for $word.comb.kv -> $index, $character {
      my $r3 = $word.substr(0, $index) ~ "." ~
        $word.substr(*-($word.chars - ($index + 1)));
      my $r5 = ( '.' x $index ) ~ $word2.substr($index, 1) ~
        ( '.' x $word.chars - 1 - $index );
      for @words.grep( / <$r3> / ).grep( / <$r5> / ) -> $x {
        if ! %seen{$x}.defined && $x ne $word {
          %seen{$x} = True;
          @ladder.push($x);
          @ladders.push(@ladder) if $x eq $word2;
          gen-word-ladder($x, @ladder, %seen);
        }
      }
    }
  }
}

Basically, this is a solution using recursion. I won’t go into detail about what the code does, I’m just going to comment on Perl6-isms that may be interesting for newcomers.

The main subroutine here is MAIN. Subroutines named MAIN are a special kind of subroutine. Such a routine is automatically called when a script is invoked. Also, the parameters you define in the subroutine are parameters that will be required on the command line when you run the script. This definition says that two words are required. Additionally there’s an optional flag that can be used (--list-all) if you also want to see the rest of the possible word ladders, not only the shortest. (If you want several different signatures to be possible, use the multi keyword and define multiple MAINs.)
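A hypothetical sketch of what multiple MAIN candidates could look like (the subcommands are invented for illustration, not from the challenge):

```perl6
multi MAIN('add', Int $a, Int $b) { say $a + $b }   # ./script add 2 3
multi MAIN('version')             { say 'v1.0' }    # ./script version
```

Perl 6 dispatches to whichever candidate the command-line arguments match, and the auto-generated usage message lists all of them.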

Note the use of subset and where here. They define constraints. Here I define a subset of Str, called Str-lc, that requires the string to be lowercase. You can check for all sorts of things. That all of this is built in saves you from writing lots of error-checking code.
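As a quick standalone check of how such a subset behaves:

```perl6
subset Str-lc of Str where * ~~ /^<lower>+$/;

say "horse" ~~ Str-lc;   # True
say "Horse" ~~ Str-lc;   # False (contains an uppercase letter)
```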

The nice thing about this, if you use the subset types in the context of MAIN, is that Perl6 will check your CLI parameters for you. If you’ve made any errors it will exit and print an automatically generated Usage message. Again, less typing and more done for you automatically!

Now, in the declaration of MAIN itself I also use the where clause. Use that to define constraints that are ad hoc and not repeated, or constraints that can’t be defined as subsets due to a dynamic nature.

You should note my use of sprintf here. You may find it redundant that I use sprintf to convert a string to a string. Due to an error in the Rakudo Star 2019.03.01 distribution of Perl 6, the comparison $word1.chars == $word2.chars doesn’t work. It throws this error:

Cannot call method 'chars' on a null object

The only time I don’t get this error is when the two strings actually are of equal length. A workaround is sprintf. If you run a string through sprintf("%s"), a brand new Str object is returned, and on that we can run instance methods. So I also get the check for length equality built into the MAIN declaration. As the above-mentioned error isn’t present in earlier versions of Rakudo, it’ll presumably be fixed again in the future so that this absurd conversion round-trip is no longer necessary.

In any case, with very little code you get lots of fancy functionality. So when you run the script with correct parameters, all runs as expected…

# ./challenge2.p6 --list-all horse lousy
horse -> house -> louse -> lousy
horse -> house -> louse -> housy -> lousy
horse -> house -> horsy -> housy -> lousy

…while a reasonably understandable usage report is printed if not…

# ./challenge2.p6 --list-all horse
Usage:
  ./challenge2.p6 [--list-all] <word1> <word2>

Other noteworthy stuff…?

Do you see the .kv used in a couple of the for loops? Normally .kv returns the keys and values of a hash interleaved [1]. But used in the context of a list/an array, the key is an integer representing the index while the value is, as you’d expect, the value. It’s as if you’ve got an automatic $counter without having to declare one. Now, it’s just a little thing, but I’ve programmed enough $counter++ lines in my life to appreciate this.
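For example, .kv on a plain array gives index/value pairs:

```perl6
my @animals = <cat dog fox>;
for @animals.kv -> $index, $value {
    say "$index: $value";
}
# 0: cat
# 1: dog
# 2: fox
```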

Another handy shorthand is this: $DICTIONARY.IO — it’s the easiest way to open a file for reading. If your string corresponds to a file, adding .IO behind the Str variable name converts it to an IO::Path object, which adds lots of stuff, including the .lines method that opens the file and returns the contents line by line [2].

Lastly I’d like to point you to / <$r3> / and / <$r5> /. The < and > surrounding a variable tell Perl 6 that you want whatever’s in it to be interpolated as a regexp. So that…

my $r5 = "al.ha";
say "alpha" ~~ /$r5/;   # Output: Nil
say "alpha" ~~ /<$r5>/; # Output: ｢alpha｣   (match)

This means you can build regexes dynamically and very simply. It’s a nice feature, although — maybe — a feature with EVAL-like downsides.

NOTES

[1] Over on Reddit /u/liztormato pointed out that my explanation about what the .kv method of an Array/List does, was wrong. I.e. I understood its effects, but misunderstood what it actually did to achieve those effects. So this explanation is Liz’s not mine.

[2] This too has been clarified by Liz on Reddit. I have to say that this is the nice thing about blogging about Perl 6: The community is very welcoming, so it’s no stress to lay out all my misunderstandings. I’ve really learned so much by writing these articles. Thanks to everyone who has pointed out errors and misunderstandings and maybe even suggested improvements.

## Strangely Consistent: Refactoring the universe

I'm here to share a thing I'm working on, and chew gum; and I'm all out of gum. The purpose of this post is both to break a disagreeable silence that has befallen this blog, and to be able to geek out about a niche topic here in writing, partially sparing friends and family.

I'm currently refactoring 007's type system in a branch. Basically, ever since 007 was created, a type in 007-land has corresponded to a class/role in Perl 6, the host system that's implementing 007.

Here's what the implementation of Int looks like currently:

class Val::Int does Val {
    has Int $.value;

    method truthy {
        ?$.value;
    }
}

And here's what it looks like as I'm doing the refactor:

constant TYPE is export = {};

BEGIN {
# ...
TYPE<Int> = make-type "Int", :backed;
# ...
}

So, instead of corresponding to types on the host level, all the 007 types are about to correspond to values. The former implementation was the one that felt obvious at the time (four-plus years ago), but it's become blindingly, painstakingly obvious that it really needs to be the latter.

Here's why: as soon as you want to implement class declarations in 007, in the former model you also need to bend over backwards and come up with an entirely new type in the host system. The Perl 6 code to do that looks like this:

return $.type.new(:type(EVAL qq[class :: \{
    method attributes \{ () \}
    method ^name(\$) \{ "{$name}" \}
\}]));

Which is... even someone as EVAL-positive as I am wishes for a less clunky solution. In the new model, a new class comes down to calling make-type and dropping the result in that TYPE hash. (Wait. Or not even dropping it in the TYPE hash. That hash is only for things used by 007 itself, not for user types.)

This is a refactor I've tried once before, back in 2017, but I failed back then because the code got too complicated and ran too slow. This time around I have a much better feeling.

By the way, there's also an is-type subroutine, and similarly a make-int and an is-int subroutine, and so on for every registered type. I figure why not wrap those simple things up in very short function names. So far that turns out to have been a very good decision. "Fold the language of your domain model into your code", and so on. This is one of the things I'm doing better this time around; last time one of the problems was that each line I touched got longer and messier because there were more layers of indirection to dig through. Concerns were scattered all over the place. This time, it feels like the codebase is getting simpler thanks to those named subroutines. Maybe it can be likened to putting all your database-specific code in one place.

I sometimes get slight vertigo due to the bootstrapping aspects of this type system. One example: Object is an instance of Type, but the base class of Type is Object — a circularity. But, it turns out, if you have absolute power over the object graph, you can always bend things to your will:

BEGIN {
    TYPE<Type> = _007::Value.new(:type(__ITSELF__), slots => { name => "Type" });
    TYPE<Object> = make-type "Object";
    {
        # Bootstrap: now that we have Object, let's make it the base of Type and Object
        TYPE<Type>.slots<base> = TYPE<Object>;
        TYPE<Object>.slots<base> = TYPE<Object>;
    }

    # ...
}

I'm slightly reminded of a thing Gilad Bracha wrote once (which I can't find the exact quote for, unfortunately): that if mutual dependencies and recursive definitions are something that stump you, what you need is a healthy dose of letrec. It's twisty, yes, but it's basically a solved problem.

Like last time, I'm tackling the big task in small steps, one type at a time. I feel I've learned this from Martin Fowler's concept of asset capture. The idea is to end up back at a running system with passing tests often. I do this by replacing one old thing at a time with a new thing. Sounds obvious, but I'm actually not sure I would have been sensible enough on my own to tackle it this way, had I not known about asset capture.

One drawback is that you're sort of running the old system and the new system in parallel, as the old one is being phased out. Only once the whole thing has been asset-captured can the complexity associated with the old system be completely removed.

It's a pleasant way to work. To me it's been at least a partial answer to the problem of the big rewrite. If we're free to refactor the insides, we can successively arrive at a point where the new, better thing has completely replaced the old thing. The way there is allowed to be a little bit more complex (on the inside) than either endpoint. Importantly, you keep a running system throughout.

I don't have a concluding thought, except to say that I just managed to asset-capture arrays. Which is harder than it sounds, because arrays are everywhere in the compiler and the runtime.

## gfldex: MONKEY see no Nil

### Published by gfldex on 2019-05-03T23:14:50

In a for loop Nil is turned into a List with one element that happens to be Any. This really bugged me, so I went to find out why. As it turns out, the culprit is the very definition of Nil is Cool. To be able to turn any single value into a List, Cool implements method list(), which takes a single value and turns it into a List with that one value.
Nil indicates the absence of a value, and turning it into a value doesn’t make sense. Luckily we can change that.

use MONKEY-TYPING;

augment class Nil {
    method list() {
        note 'Trying to turn Nil into a list.';
        note Backtrace.new.list.tail.Str;
        Empty
    }
}
Nil.HOW.compose(Nil);

sub niler() { Nil }
for niler() { say 'oi‽' }

We can’t just warn because that would show the wrong point in the stack trace. So we note (which also goes to $*ERR) and pull the last value out of the Backtrace.

Interestingly, Failure throws both in .list and in .iterator. Nil implements push, append, unshift and prepend by immediately dying. Adding more to nothing is deadly, but turning nothing first into something vaguely undefined and then allowing more stuff to be added to it is inconsistent at best. Which leads me to believe that Nil.list as it is specced today is just an oversight.

At least I can now write a simple module to protect my code from surprising Nils.

## Jo Christian Oterhals: Perl 6 small stuff #18: applying permutations to an anagram challenge

The Perl Weekly Challenge has come to its fifth instalment. This time both challenges have to do with anagrams.

Challenge 1: Write a program which prints out all anagrams for a given word. For more information about Anagram, please check this wikipediapage.
Challenge 2: Write a program to find the sequence of characters that has the most anagrams.

Now — the challenge doesn’t mention whether the solutions are supposed to find one-word anagrams or multi-word anagrams. Later, on Twitter, we were told to write one-word solutions, but at that point I’d finished my answer, which supports both one- and two-word solutions.

### Challenge 1

Let’s start with the simplest scenario: one-word anagrams. Let’s say you have a word — wolf — and want to find its anagrams (it has only one, “flow”). A good starting point is to split the word into its parts, the characters, so that you can manipulate it to find the anagram. For the splitting you can use the routine .comb on the string. That returns a List with the characters.

Now — the List object has an interesting routine, .permutations. You won’t find an equivalent to this in Perl 5 without resorting to modules. This great function is just built-in to Perl 6, so let’s use it: A naive way of trying to find anagrams would be to apply permutations on the list.

> "wolf".comb.permutations((f l o w) (f l w o) (f o l w) (f o w l) (f w l o) (f w o l) (l f o w) (l f w o) (l o f w) (l o w f) (l w f o) (l w o f) (o f l w) (o f w l) (o l f w) (o l w f) (o w f l) (o w l f) (w f l o) (w f o l) (w l f o) (w l o f) (w o f l) (w o l f))

And sure enough, “flow” is the first result! However, permutations are just that: Permutations. There is no guarantee that the permutation in question actually is a word. So we have to introduce a dictionary to check the permutations against.

On my Mac (macOS 10.14.1), there’s a dictionary of a couple of hundred thousand words in the file /usr/share/dict/words. Many Linux distributions have the same, and for those distributions and operating systems which don’t, dictionaries are easy to find online. In this example I assume the existence of that dictionary file.

What the code below does: Line 1 reads the dictionary and stores it in memory. I convert it to a hash where each word has the value True. This makes it easy to check a word’s existence: Just try to access the element in the %dict variable. If the returned value is True, then we’re OK.
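In miniature, that existence check works like this (a hypothetical three-word dictionary stands in for the real file):

```perl6
my %dict = <wolf flow fowl>.map({ $_ => True });

say %dict{'flow'} // False;   # True  (the word exists)
say %dict{'wlfo'} // False;   # False (no such word)
```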

Line 2 tries to figure out whether you provide words to the script through stdin (using a pipe) or as arguments on the command line. It then loops through the words provided and tries to find anagrams that match (lines 3 and 4).

File: anagrams.p6
my %dict = "/usr/share/dict/words".IO.lines().map({ $_.lc => True });
for @*ARGS ?? @*ARGS !! !$*IN.t ?? lines() !! Nil -> $w {
    say "$w - $_" if %dict{$_} and $_ ne $w
        for unique do for $w.lc.comb.permutations { $_.join };
}

There you have it: The solution to Challenge 1 is four lines of code (in reality three lines, because I added a newline for readability’s sake).

# Output:
$ perl6 anagrams.p6 fowl tornado
fowl - flow
fowl - wolf
tornado - odorant
tornado - donator

A couple of noteworthy things here: For some reason .permutations doesn’t guarantee that a permutation is unique. The returned list can therefore contain duplicates. So I use unique to filter out duplicates.

Note $*IN.t; the special variable $*IN is Perl 6’s name for STDIN. It has a few methods and properties, among them the method .t. That returns True or False depending on whether the program can expect standard input to come from the terminal (tty) or through a pipe.

Note: If you prefer, you could rewrite line 3 so that no if’s or for’s are used at all. It would look something like this:

my $dict = "/usr/share/dict/words".IO.lines()>>.lc.Set;
for @*ARGS ?? @*ARGS !! ! $*IN.t ?? lines() !! '' -> $w {
    $w.lc.comb.permutations>>.join.grep({ $dict{$_} and $_ ne $w }).map({ "$w\t$_\n" }).unique.join.say;
}

As you can see I’ve used >> (.hyper) here. That tries to run the .join in batches in parallel, and is therefore potentially much faster. There’s no notable speed difference here, but I thought I’d start implementing it anyway so that it’s there when it can have an effect.

### Challenge 2

Now — how can we figure out what letter combination returns the most anagrams? By starting with a script that begins exactly like the first — line 1 reads the dictionary file and stores it in a hash.

Line 2 declares a new hash. We will use this to create a kind of index with fast lookup, kind of a reverse dictionary. In line 3 I loop through every word in the dictionary and sort the characters in the word alphabetically. This sorted version becomes the key of our index; every word corresponding to this key will be stored in the same element of the hash, in a list of corresponding words.

With what we have now we try to figure out something about max anagrams, as the assignment calls for. Lines 4 and 5 take this reverse dictionary and sort the keys in descending order of the number of corresponding words.
Line 5 just checks the first in the list — i.e. the character combination with the most anagrams — and stores the number of anagrams in the variable $max-anagrams.

File: max-anagrams.p6
my %dict = "/usr/share/dict/words".IO.lines().map({ $_.lc => True });
my %rev-dict;
push %rev-dict{$_.comb.sort.join}, $_ for %dict.keys;
my @variation = reverse sort { %rev-dict{$_}.elems }, %rev-dict.keys;
my $max-anagrams = %rev-dict{@variation[0]}.elems;
for @variation {
    last if %rev-dict{$_}.elems != $max-anagrams;
    say "$_ ($max-anagrams): " ~ %rev-dict{$_}.join(',');
}

Lines 6 through 11 iterate through the sorted list and print some information about every character sequence with the same number of anagrams as the sequence we got the number for in line 5.

Line 7 makes sure that as soon as we iterate to a character sequence with less than the max number of anagrams, we break out of the loop and end the program.

# Output:
$ perl6 max-anagrams.p6
aelps (10): lepas,sepal,pales,spale,speal,elaps,slape,salep,lapse,saple
agnor (10): argon,orang,grano,rogan,groan,ronga,goran,angor,nagor,organ

That’s it. I guess somebody with a better understanding of Perl 6 than me could — perhaps — utilize some kind of parallelism to speed up the index generation. My implementation is fairly slow, so don’t be alarmed if the program seems to hang. It hasn’t. It’s just computing.

Added later: Somebody actually read this and tried to implement it with parallelism! See Parallel permutations by gfldex. It’s so much fun to see that somebody reads what I write and also interacts with it.

Thanks again to Mohammad Anwar for running the Perl Weekly Challenge. I find it inspiring.

BTW, I have versions of the above that create two-word anagrams as well. Those utilize the .combinations routine combined with set arithmetic, two great features that are built into Perl 6. Perl 6 is a BIG language, but it’s when you find features like the aforementioned permutations and combinations that you understand why. After a while you get to expect that if you think something should be supported by base Perl 6, it surprisingly often is! More stuff is possible without resorting to third-party libraries, and with consistent and fairly elegant design. What’s not to love?

## gfldex: Parallel permutations

### Published by gfldex on 2019-04-27T15:31:45

Jo Christian Oterhals asked for a parallel solution for challenge 2. I believe he had problems finding one himself, because his code sports quite a few for loops. By changing those to method call chains, we can use .hyper to run at least some code concurrently.

use v6.d;

constant CORES = $*KERNEL.cpu-cores;

# workaround for #1210
sub prefix:<[max]>(%h){ %h.sort(-*.value).first }

my %dict = "/usr/share/dict/words".IO.lines.map({ .lc => True });

my %seen;
%dict.keys».&{ %seen{.comb.sort.join}++; };

with [max] %seen {
    say .value, .key.comb.hyper(:batch(1024), :degree(CORES)).permutations».join.grep({ %dict{$_}:exists }).Str
}

My approach is a little different than Jo’s. I don’t try to keep all combinations around but just count the anagrams for each entry in the word list. Then I find a word with the most anagrams (there are more candidates with the same amount, which I skip) and reconstruct the anagrams for that word.

The only operation where any real computation happens is the generation of permutations. Anything else is just too memory-bound to get a boost from spinning up threads. With the .hyper call the program is a tiny wee bit faster than with just one thread on my Threadripper box. A system with slower cores/smaller caches should benefit a little more. The main issue is that the entire word list fits into the 3rd-level cache. With a bigger dataset a fast system might benefit as well.

In many cases multi-core systems are fairy dust, which makes the wallets of chip makers sparkle. Wrangling hashes seems to be one of those cases.

## gfldex: Nil is a pessimist

### Published by gfldex on 2019-04-24T19:23:05

Guifa was unhappy with $xml.elements returning a list with one undefined element if there are no child nodes. That led me to the conclusion that Nil is only half Empty.

Let’s consider this piece of code.

sub nilish() { Nil };
for nilish() { say 'oi‽' }

my $nil := nilish(); for$nil { say 'still oi‽' }

sub no-return() { }
for no-return() { say 'even more oi‽' }

sub return-a-list( --> List:D ) { Nil }
for return-a-list() { say 'Should this happen?' }

# OUTPUT:
# oi‽
# still oi‽
# even more oi‽
# Should this happen?

We are iterating over the special container-reset-value called Nil because there is no container to reset. Since for binds to its arguments, it binds to Nil. A type object, even a very special one like Nil, is a single item, which is treated as a list with one element by for.

We can solve this problem with a multi sub that turns unreasonable values into the empty list.

multi sub undefined-to-empty(Nil) { Empty }
multi sub undefined-to-empty(@a where all @a.map(!*.defined)) { Empty }
multi sub undefined-to-empty(\item) { item }

for undefined-to-empty(Nil) { say 'nothing happens' }
for undefined-to-empty((Any,)) { say 'nothing happens' }

By adding a candidate that checks if there are only undefined values in a list, we can also solve guifa’s problem.

This is of course just a shim. The real solution is to stop turning null into Nil in native bindings. If you write a sub that has to return a list but can’t, either fail or return Empty if there is nothing to return. To help avoid that mistake in the future I filed #2721.

If it looks empty or sounds empty or tastes empty, make it Empty!

## Jo Christian Oterhals: Perl 6 small stuff #17: a weekly challenge of Big Pi’s, Bags and modules

I didn’t have the time to solve last week’s challenge, but Easter has started with slower days — so I decided to try my hand at the Perl Weekly Challenge number 4.

In my opinion, exercises 1 and 2 were not beginner vs advanced this time; both were peculiar. The first exercise was this:

Write a script to output the same number of PI digits as the size of your script. Say, if your script size is 10, it should print 3.141592653.

Thinking about this I thought that Perl 5 seemed to be easiest this time, as the Math::BigFloat package has the method bpi that returns PI to the given precision (i.e. bpi(10) returns 3.141592654). All I’d have to do was to figure out the file size of the script itself and return PI to that precision. I.e. a Perl 5 answer would look something like this:

#!/usr/bin/env perl
use v5.18;
use Math::BigFloat 'bpi';
say bpi(-s $ARGV[0]);

I’m uncertain as to whether the size of the script means the number of characters in the script file, or whether it is the size of the script in bytes (in a unicode world those two aren’t necessarily identical). I chose to believe it’s the script size in bytes we’re talking about.

Since Perl 6 does not have — as far as I know — an equivalent to bpi built in, a Perl 6 answer must implement such functionality itself. But: If I put that code into the script itself, the answer would probably be so long that it exceeded the number of digits of PI a bpi implementation like Perl 5’s could return. So this gave me an excellent opportunity to introduce modules as a part of the solution.

Script: PWC004-01.p6
Usage: perl6 -I. PWC-004-01.p6
Output: 3.141592653589793238462643383279502884197169399375105820974945

#!/usr/bin/env perl6

use BigPI;
say BigPI::pi $?FILE.IO.s;

A couple of fun things about this: $?FILE refers to the script file itself. If you prefer a more self-explanatory variant, $*PROGRAM-NAME can be used instead of $?FILE. .IO.s returns the size of the file in bytes and, as mentioned above, seems to be what the exercise calls for. However, see note [1] below if you’d rather have the size be the number of characters instead. And if you’ve mixed unicode into your Perl 5 script, see [2] for some Perl 5-specific notes.

Anyway, the module referenced here is stored in a separate file, and looks like this.

Module: BigPI.pm6

# Place in script directory and use perl6 with -I flag
# i.e. perl6 -I. <calling script>

unit module BigPI;

# This definition of PI is borrowed from Perl 5's Math::BigFloat
constant PI = join '', qw:to/END/;
314159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921717629317675238467481846766940513200056812714526356082778577134275778960917363717872146844090122495343014654958537105079227968925892354201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859502445945534690830264252230825334468503526193118817101000313783875288658753320838142061717766914730359825349042875546873115956286388235378759375195778185778053217122680661300192787661119590921642019893809525720106548586327886593615338182796823030195
END

our sub pi(Int $precision where * <= PI.chars = 10) {
    return "3." ~ PI.substr(1, $precision - 2) ~ round(PI.substr($precision - 1, 2) / 10);
}

The first line, unit module BigPI;, tells the interpreter that everything that follows is a part of this single module. If I had wanted to put several modules into one file, I’d define them with module NAME { …content… }.

To avoid collision with the built-in pi, I used the our sub statement. This means that you have to refer to the sub using the full name, i.e. BigPI::pi. You could have chosen to export pi instead (is export), which would have let you refer to pi directly. But I prefer the other version to avoid namespace collisions.
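A minimal sketch of the difference (module and sub names are hypothetical):

```perl6
# Greetings.pm6
unit module Greetings;

our sub hi($name) { "Hi, $name!" }            # call as Greetings::hi(...)
sub bye($name) is export { "Bye, $name!" }    # exported: call simply as bye(...)
```

A script that does use Greetings; can then call Greetings::hi("Ada") with the full name, while the exported bye("Ada") is available directly.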

This tiny module also showcases a couple of other Perl 6-specific things. One, I use Perl 6’s built-in mechanics for defining constraints within the subroutine signature. In this case I tell the Perl 6 compiler not to allow a precision higher than the number of digits in the PI constant (where * <= PI.chars). If you call the sub with a higher precision, an error will be thrown. In addition I define a default precision of 10 should you want to call BigPI::pi without an argument (the = 10 at the end).
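The same signature pattern in a standalone sketch (the sub name and the limit of 20 are made up for illustration):

```perl6
sub precise(Int $digits where * <= 20 = 10) {
    say "asked for $digits digits";
}

precise();     # asked for 10 digits (the default)
precise(15);   # asked for 15 digits
# precise(50) would die with a constraint type check failure
```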

The second exercise was this:

You are given a file containing a list of words (case insensitive 1 word per line) and a list of letters. Print each word from the file that can be made using only letters from the list. You can use each letter only once (though there can be duplicates and you can use each of them once), you don’t have to use all the letters.

I have to admit I had to read this several times before understanding it, and I have a feeling that I may not have even now. But I read it like this: Every word in the file is to be checked against a list of letters. That list can contain duplicates. If the word “abba” is checked against a list <a a a a b b b c d d e>, two a’s and two b’s will be removed from the list, so that what’s left is <a a b c d d e> before checking that against the next word. If the next word is “be”, the list is reduced to <a a c d d>. Had the word been bee, however, there would be no match and the list would still be <a a b c d d e> when checking the next word.

#!/usr/bin/env perl6
my $letters = ( gather for 1..500 { take ('a'..'z').pick } ).BagHash;
say ([+] $letters.values) ~ " letters matches...";

for "random-2000.dict".IO.lines.sort({ rand }) -> $word {
    my $wbh = $word.lc.comb.BagHash;
    if ($wbh (<=) $letters) {
        $letters = $letters (-) $wbh;
        say "\t" ~ $word;
    }
}
say ([+] $letters.values) ~ " letters remain.";

This script assumes there is a file called “random-2000.dict” in the working directory, containing a list of words (I have a list of 2000 random words in that file). It also assumes, perhaps naively, that the words are made of the letters a to z, i.e. standard English.

I start the script by generating a list of 500 characters, a random selection of a to z’s. You’ve seen me use gather and take before, so I won’t explain that here again. Rather I’d like to point out that I immediately convert the list to a BagHash. A BagHash is a short and simple version of stuff you’ve programmed manually in Perl 5 before. The letters a to z are keys of this hash, and the value for each key is the number of occurrences of that letter (if the value is zero, the key is removed).

I loop through a randomly sorted list of words, and convert each word into a BagHash of its own. And now I can start to use some really useful infixes that work on BagHashes. In the statement if ($wbh (<=) $letters) the infix (<=) compares the word’s BagHash with the big bag of letters. If the word is contained by or equal to the bag of letters, we have a matching word and can print it to screen. At the same time I remove the letters of the word from the bag of letters by using the infix (-). So a word <a b b a> removed from the list of letters <a a a a b b b c d d e> would reduce that list to <a a b c d d e> before testing the next word. These infixes save us from writing a lot of code here!
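In isolation, the two infixes behave like this (a tiny hand-made bag stands in for the 500-letter one):

```perl6
my $letters = <a a a a b b b c d d e>.BagHash;
my $word    = "abba".comb.BagHash;

say $word (<=) $letters;        # True: the word's letters fit in the bag
$letters = $letters (-) $word;  # remove two a's and two b's
say $letters{'a'};              # 2
say $letters{'b'};              # 1
```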

I’m glad the exercise stated that I didn’t have to use all of the letters, because I never seem to be able to. Every time I run this script a couple of hundred letters remain.

You should also note the use of the prefix [+] in this code. Used on a list/an array, it sums all of the elements of that list. It’s not strictly necessary to use it here; I use it only to output some information about the list of letters before and after the run.
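For example, reducing a small list (the numbers are made up for illustration):

```perl6
my @n = 3, 1, 4, 1, 5;
say [+] @n;    # 14
say [*] @n;    # 60
say [max] @n;  # 5
```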

Here is the output of the script when I run it:

500 letters matches
	ingrainedly
	regarding
	trichinopoly
	solenoidal
	prophloem
	beggary
	paravertebral
	tapinophobia
	awarder
	isocratic
	mucic
	tox
	handwheel
	suasible
	skiptail
	hippoboscid
	colopexy
	undeviating
	onetime
	stiped
	housekeep
	cocci
	adjudgment
	genys
	Boche
	fletch
	nunky
	l
	huck
	whuff
	smug
	hut
	Lulu
259 letters remain.

This was a great challenge. As simple as they may seem at first glance, they got me to use several Perl 6 specific techniques. It was a great way to “hone my skills” so to speak.

NOTES

[1] If size is meant as “the size of the file in bytes”, “filename”.IO.s returns exactly what you want. But as mentioned above, files with Unicode characters in them can potentially have several times more bytes than characters.

A file containing a single character — a — is one byte long. So here the number of bytes and characters are equal. But another file containing this single character — 🕹 — is four bytes long.

So which is it? What represents “size” most correct? I’m not sure. I guess bytes is the safe bet. But should you want to count characters and use that number for PI precision, do this instead:

say BigPI::pi chars slurp $?FILE;

There’s more than one way to do it. And to interpret it, I guess.

[2] If you’re programming Perl 5, you’ve got similar issues. -s "filename" returns the size of a file in bytes while length($variable) returns the number of characters. Or so you should think. But consider this script:

#!/usr/bin/perl
use v5.18;
$a = "🕹";
$b = "j";
say length($a);
say length($b);

One would think that both statements return 1. But in this case the first say prints 4 and the second prints 1. It’s as if length returns bytes and not characters. Why?

Perl 5 has had some level of unicode support since v5.6 from March 2000 (though, I’m sorry to say, it feels like an afterthought even in the latest development versions 19 years later). You can turn much of the unicode support on by, for instance, calling the perl interpreter with the -C flag. But you’ll have to tell the interpreter even more explicitly if your script file itself includes utf8 text (Perl 5 assumes no utf8 by default).

The above script does include utf8, so add a use utf8; at the start of the script to force perl to interpret utf8 characters as characters, not bytes. Now both say statements will print 1 as expected. Rather unexpectedly, perhaps, you even get the possibility to use utf8 letter characters in subroutine and scalar names once you’ve added the use statement at the beginning.

#!/usr/bin/perl
use v5.18;
use utf8;
my $a = "🕹";
my $b = "j";
my $珠 = "perl utf8"; # Throws an error without 'use utf8'
say length($a);
say length($b);
say $珠;

Not quite Perl 6 level yet, but it’s getting there.

## Jo Christian Oterhals: Perl 6 small stuff #16: All your base are belong to us

It’s the second week of the Perl Weekly Challenge, and like last week we’ve got two assignments — one “beginner” and one “advanced”.

The advanced assignment this time was: “Write a script that can convert integers to and from a base35 representation, using the characters 0–9 and A-Y.” Even though this is a blog mainly about Perl 6, I thought it’d be fun to start with my Perl 5 solutions to the advanced assignment, just so it’s even easier to appreciate the simplicity of the Perl 6 solution… although not, as you will see, without some discussion.

PERL 5
# Convert from base35 to base10
perl -E '%d = map { $_ => $c++ } (0..9,A..Y); $i = 1; for (reverse(split("", @ARGV[0]))) { $e += $i * $d{$_}; $i = $i * 35; } say $e' 1M5
# Output: 2000
# Convert from base10 to base35
perl -E '%d = map { $c++ => $_ } (0..9,A..Y); while ($ARGV[0] > 0) { push @n, $d{$ARGV[0] % 35}; $ARGV[0] = int($ARGV[0] / 35); } say join("", reverse(@n));' 2000
# Output: 1M5

So these are working one-liners but hardly readable ones. They also violate a lot of best practices. So I expanded them into a full script that’s easier to reuse and understand, with added strict and error handling as well as support for positive/negative (+ and -) prefixes.

#!/usr/bin/env perl
#
# Usage:
# perl base35.pl [+-]NUMBER FROM-BASE, e.g.
#
# perl base35.pl 1000 10
# Output: SK
#
# perl base35.pl SK 35
# Output: 1000
#
# perl base35.pl -SK 35
# Output: -1000

use v5.18;

say base35_conv(@ARGV);

sub base35_conv {
  my ($no, $base) = (uc(shift), shift);

  if ($base != 10 && $base != 35) {
    warn "Not a valid base, must be 10 or 35";
    return -1;
  }

  if (($base == 35 && $no !~ /^[\+\-]{0,1}[0-9A-Y]+$/) ||
      ($base == 10 && $no !~ /^[\+\-]{0,1}[0-9]+$/)) {
    warn "You have to provide a valid number for the given base";
    return -1;
  }

  my ($c, $e) = (0, 0);
  my $prefix = $no =~ s/^(\+|-)// ? $1 : "";
  my %d = map { if ($base == 35) { $_ => $c++ } else { $c++ => $_ } } (0..9,'A'..'Y');

  if ($base == 35) {
    my $i = 1;
    for (reverse(split("", $no))) {
      $e += $i * $d{$_};
      $i = $i * 35;
    }
  }
  else {
    my @digits;
    while ($no > 0) {
      push @digits, $d{$no % 35};
      $no = int($no / 35);
    }
    $e = join("", reverse(@digits));
  }

  return ( $prefix ? $prefix : "" ) . $e;
}

There’s really not much to comment about the code above. It works and is reasonably readable. It’s quite long, however, and that’s where Perl 6 comes in and destroys it.

PERL 6

# Convert from base35 to base10
perl6 -e 'say "1M5".parse-base(35)'
# Output: 2000

# Convert from base10 to base35
perl6 -e 'say 2000.base(35)'
# Output: 1M5

At this point you’re allowed to stop for a second and appreciate the simplicity of Perl 6. But: since these are built-in functions in Perl 6, this wasn’t — in my opinion — the best Perl 6 assignment.
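If you want the from-scratch algorithm without the Perl 5 boilerplate, the same positional-notation idea can be sketched in a few lines of Python (an illustration, not part of the original post; Python's built-in int() happens to accept bases up to 36, mirroring parse-base):

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXY"  # the 35 symbols: 0-9 and A-Y

def to_base35(n):
    # Repeated division by 35, collecting remainders least-significant first.
    if n == 0:
        return "0"
    sign, n = ("-", -n) if n < 0 else ("", n)
    out = []
    while n:
        n, r = divmod(n, 35)
        out.append(DIGITS[r])
    return sign + "".join(reversed(out))

def from_base35(s):
    # Horner's method over the digit string.
    value = 0
    for ch in s:
        value = value * 35 + DIGITS.index(ch)
    return value

print(to_base35(2000))     # 1M5
print(from_base35("1M5"))  # 2000
print(int("1M5", 35))      # 2000, via the built-in
```

The two helper names (`to_base35`, `from_base35`) are invented for this sketch; the loop structure is the same as in the Perl 5 one-liners above.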
I guess the point of the assignment is to write a solution from scratch — had I solved the Perl 5 version of the assignment by using a ready-made CPAN module such as Math::Int2Base, I’d feel that I cheated. Maybe that’s just me?

As for the “beginner” assignment this time — “Write a script or one-liner to remove leading zeros from positive numbers” — my Perl 6 and Perl 5 solutions are identical:

perl -E 'say "001000"*1;'
perl6 -e 'say "001000"*1;'
# Both output: 1000

Although the assignment wants a script that removes leading zeroes from positive numbers, this will just as easily remove them from negative numbers as well. It will also work on floating point numbers:

perl -E 'say "-001000"*1;'
perl6 -e 'say "-001000"*1;'
# Both output: -1000

perl -E 'say "001000.234"*1;'
perl6 -e 'say "001000.234"*1;'
# Both output: 1000.234

You can take it one step further with Perl 6, though. Should you for some reason — and I’m not able to think of a good one, to be honest — want to do the same on a fraction, this is the way to do it:

perl6 -e 'say "003/4".Rat.nude.join("/");'
# Output: 3/4

.nude returns a two-element list with the numerator and denominator so that we can choose how to represent it (a naive say "3/4"*1; would print 0.75 and would therefore not be a satisfying solution considering how the assignment is specified).

So that’s it for now. It may sound a little silly to write this in a Perl 6 centric blog, but what made the assignment interesting this week was Perl 5. I look forward to next week’s assignment already.

## my Timotimo \this: Intermediate Progress Report: Heap Snapshots

### Published by Timo Paulssen on 2019-03-22T22:22:00

Hello dear readers, this time I don't have something finished to show off, but nevertheless I can tell you all about the work I've recently started. The very first post on this blog already had a little bit about the heap snapshot profiler.
It was about introducing a binary format to the heap snapshot profiler so that snapshots can be written out immediately after they are taken, and by MoarVM itself rather than by NQP code. This also made it possible to load snapshot data faster: the files contain an index that allows a big chunk of data to be split up exactly in the middle and then read and decoded by two threads in parallel. The new format also resulted in much smaller output files, and of course reduced the memory usage of turning on the profiler while running perl6 code.

However, since it still captures one heap snapshot every time the GC runs, every ordinary program that runs for longer than a few minutes will accumulate quite an amount of data. Heap snapshot files (which are actually collections of multiple snapshots) can very easily outgrow a gigabyte. There would have to be another change.

## Enter Compression

The new format already contained the simplest thing that you could call compression. Instead of simply writing every record to the file as it comes, records that hold smaller numbers are stored with a shorter representation. This saved a lot of space already, but not nearly as much as off-the-shelf compression techniques would. There had to be another way to get compression than just coming up with my own compression scheme!

Well, obviously I could have just implemented something that already exists. However, at the time I was discouraged by the specific requirements of the heap snapshot analyzer - the tool that reads the files to let the user interactively explore the data within it:

• Reading the file has to be fast
• Any snapshot in the file must be accessible equally quickly and in any order
• In order to get decent speed out of the reading process, it has to be multithreadable

Normally, compression formats are not built to support easily seeking to any given spot in the uncompressed data.
There was of course the possibility to compress each snapshot individually, but that would mean a whole snapshot could either only be read in with a single thread, or the decompression would have to go through the whole blob and, when the first splittable piece was decompressed, a thread could go off and parse it. I'm not entirely sure why I didn't go with that; perhaps I just didn't think of it back then. After all, it's already been more than a year, and my brain compresses data by forgetting stuff.

Anyway, recently I decided I'd try a regular compression format for the new MoarVM heap snapshot file format. There's already a Perl 6 module named Compress::Zlib, which I first wanted to use. Writing the data out from MoarVM was trivial once I linked it to zlib. Just replace fopen with gzopen, fwrite with gzwrite, fclose with gzclose and you're almost done! The compression ratio wasn't too shabby either.

When I mentioned this in the #moarvm channel on IRC, I was asked why I used zlib instead of zstd. After all, zstd usually (or always?) outperforms zlib in both compression/decompression speed and output size. The only answer I had for that was that I hadn't used the zstd C library yet, and there was not yet a Perl 6 module for it.

Figuring out zstd didn't go as smoothly as zlib, not by a long shot. But first I'd like to explain how I solved the issue of reading the file with multiple threads.

## Restructuring the data

In the current binary format, there are areas for different kinds of objects that occur once per snapshot. Those are collectables and references. On top of that there are objects that are shared across snapshots: strings that are referenced from all the other kinds of objects (for example filenames, or descriptions for references like "hashtable entry"), static frames (made up of a name like "push", an ID, a filename, and a line number), and types (made up of a repr name like VMArray, P6opaque, or CStruct and a type name like BagHash or Regex).
That resulted in a file format that has one object after the other in the given area. The heap snapshot analyzer itself then goes through these areas and splits the individual values apart, then shoves them off into a queue for another thread to actually store.

Storage inside the heap analyzer consists of one array for each part of these objects. For example, there is one array of integers for all the descriptions and one array of integers for all the target collectables. The main benefit of that is not having to go through all the objects when the garbage collector runs. The new format, on the other hand, puts every value of each attribute in one long string before continuing with the next attribute.

Here's how the data for static frames was laid out in the file in the previous version:

"sframes", count,
name 1, uuid 1, file 1, line 1,
name 2, uuid 2, file 2, line 2,
name 3, uuid 3, file 3, line 3,
…
index data

The count at the beginning tells us how many entries we should be expecting. For collectables, types, reprs, and static frames this gives us the exact number of bytes to look for, too, since every entry has the same size. References on the other hand have a simple "compression" applied to them, which doesn't allow us to just figure out the total size by knowing the count. To offset this, the total size lives at the end in a place that can easily be found by the parser. Strings are also variable in length, but there's only a few hundred of them usually. References take up the most space in total; having four to five times as many references as there are collectables is pretty normal.
Here's how the same static frame data is laid out in the upcoming format:

"names", length, zstd(name 1, name 2, name 3, …), index data,
"uuids", length, zstd(uuid 1, uuid 2, uuid 3, …), index data,
"files", length, zstd(file 1, file 2, file 3, …), index data,
"lines", length, zstd(line 1, line 2, line 3, …), index data

As you can see, the values for each attribute now live in the same space. Each attribute blob is compressed individually; each has a little piece of index data at the end and a length field at the start. The length field is actually supposed to hold the total size of the compressed blob, but if the heap snapshot is being output to a destination that doesn't support seeking (moving back in the file and overwriting an earlier piece of data) we'll just leave it zeroed out and rely on zstd's format being able to tell when one blob ends.

There are some benefits to this approach that I'd like to point out:

• The compression algorithm can perhaps figure patterns out better, because more "random" data isn't interspersed to break up patterns. I have not actually tested this yet, though!
• The individual attributes can be uncompressed and stored away in parallel, since there's one count shared between many attributes. Compare this to decoding 500 thousand collectables on one thread and 1.7 million on another.
• Need to add another attribute for some reason? You can just put it next to the other blobs, give it a new name that the older versions don't know, and they'll just skip it!

The last point, in fact, will let me put some extra fun into the files. First of all, I currently start the files with a "plain text comment" that explains what kind of file it is and how it's structured internally. That way, if someone stumbles upon this file in fifty years, they can get started finding out the contents right away!
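The per-attribute layout can be mimicked in a few lines of Python with zlib (the post uses zstd via C; this is only a shape-of-the-idea sketch with invented helper names, using an in-memory list instead of a real file):

```python
import struct
import zlib

def write_attribute(buf, name, values):
    # One compressed blob per attribute, tagged with its name and byte length,
    # mirroring the '"names", length, zstd(...)' layout described above.
    blob = zlib.compress(b"".join(struct.pack("<q", v) for v in values))
    buf.append((name, len(blob), blob))

def read_attribute(buf, wanted):
    # Each attribute can be located and decompressed independently,
    # which is what allows separate threads to each take one attribute.
    for name, length, blob in buf:
        if name == wanted:
            raw = zlib.decompress(blob)
            return [v[0] for v in struct.iter_unpack("<q", raw)]

snapshot = []
write_attribute(snapshot, "lines", [10, 11, 11, 12, 40])
write_attribute(snapshot, "files", [0, 0, 1, 1, 2])
print(read_attribute(snapshot, "lines"))  # [10, 11, 11, 12, 40]
```

A reader that doesn't know an attribute name simply skips over its length bytes, which is the forward-compatibility property the third bullet point relies on.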
On a perhaps more serious note, I'll put in summaries of each snapshot that MoarVM itself can already generate while it's writing out the snapshot. Not only things like "total number of collectables", but also "top 25 types by size, top 25 types by count". That will actually make the heapanalyzer not need to touch the actual data until the user is interested in more specific data, like a "top 100" or "give me a thousand objects of type 'Hash'". On top of that, why not allow the user to "edit" heap snapshots, put some comments in before sharing them with others, or maybe "bookmark" specific objects? All of these things will be easier to do with the new format - that's the hope, at least!

## Did the compression work?

I didn't actually have the patience to exhaustively measure all the details, but here's a rough ratio for comparison: one dataset I've got results in a 1.1 gigabyte file with the current binary format, a 147 megabyte file when using gzip, and a 99 megabyte file using zstd (at the maximum "regular" compression level - I haven't checked yet whether the cpu usage is prohibitive for this, though). It seems like this is a viable way forward! Allowing capture to run 10x as long is a nice thing for sure.

## What comes next?

The profiler view itself in moarperf isn't done yet, of course. I may not put in more than the simplest of features if I start on the web frontend for the heap analyzer itself. On the other hand, there's another task that's been waiting: packaging moarperf for simpler usage. Recently we got support for a relocatable perl6 binary merged into master. That should make it possible to create an AppImage of moarperf. A docker container should also be relatively easy to build.

We'll see what my actual next steps will be - or will have been, I suppose - when I post the next update!
Thanks for reading and take care
- Timo

## brrt to the future: Reverse Linear Scan Allocation is probably a good idea

### Published by Bart Wiegmans on 2019-03-21T22:52:00

Hi hackers! First of all, I want to thank everybody who gave such useful feedback on my last post. For instance, I found out that the similarity between the expression JIT IR and the Testarossa Trees IR is quite remarkable, and that they have a fix for the problem that is quite different from what I had in mind.

Today I want to write something about register allocation, however. Register allocation is probably not my favorite problem, on account of being both messy and thankless. It is a messy problem because - aside from being NP-hard to solve optimally - hardware instruction sets and software ABIs introduce all sorts of annoying constraints. And it is a thankless problem because the cases in which a good register allocator is useful - for instance, when there are lots of intermediate values used over a long stretch of code - are fairly rare. Much more common are the cases in which either there are trivially sufficient registers, or ABI constraints force a spill to memory anyway (e.g. when calling a function, almost all registers can be overwritten).

So, on account of this being not my favorite problem, and also because I promised to implement optimizations in the register allocator, I've been researching whether there is a way to do better. And what better place to look than one of the fastest dynamic language implementations around, LuaJIT? So that's what I did, and this post is about what I learned from that.

Truth be told, LuaJIT is not at all a learners' codebase (and I don't think its author would claim it is). It uses a rather terse style of C and lots and lots of preprocessor macros. I had somewhat gotten used to the style from hacking dynasm though, so that wasn't so bad.
What was more surprising is that some of the steps in code generation that are distinct and separate in the MoarVM JIT - instruction selection, register allocation and emitting bytecode - were all blended together in LuaJIT. Over multiple backend architectures, too. And what's more - all these steps were done in reverse order - from the end of the program (trace) to the beginning. Now that's interesting...

I have no intention of combining all phases of code generation like LuaJIT has. But processing the IR in reverse seems to have some interesting properties. To understand why that is, I'll first have to explain how linear scan allocation currently works in MoarVM, and as it is most commonly described:

1. First, the live ranges of program values are computed. Like the name indicates, these represent the range of the program code in which a value is both defined and may be used. Note that for the purpose of register allocation, the notion of a value shifts somewhat. In the expression DAG IR, a value is the result of a single computation. But for the purposes of register allocation, a value includes all its copies, as well as values computed from different conditional branches. This is necessary because when we actually start allocating registers, we need to know when a value is no longer in use (so we can reuse the register) and how long a value will remain in use.
2. Because a value may be computed from distinct conditional branches, it is necessary to compute the holes in the live ranges. Holes exist because if a value is defined in both sides of a conditional branch, the range will cover both the earlier (in code order) branch and the later branch - but from the start of the later branch to its definition that value doesn't actually exist. We need this information to prevent the register allocator from trying to spill-and-load a nonexistent value, for instance.
3. Only then can we allocate and assign the actual registers to instructions.
Because we might have to spill values to memory, and because values now can have multiple definitions, this is a somewhat subtle problem. Also, we'll have to resolve all architecture-specific register requirements in this step.

In the MoarVM register allocator, there's a fourth step and a fifth step. The fourth step exists to ensure that instructions conform to x86 two-operand form (rather than return the result of an instruction in a third register, x86 reuses one of the input registers as the output register, i.e. all operators are of the form a = op(a, b) rather than a = op(b, c); this saves on instruction encoding space). The fifth step inserts instructions that are introduced by the third step; this is done so that each instruction has a fixed address in the stream while the stream is being processed.

Altogether this is quite a bit of complexity and work, even for what is arguably the simplest correct global register allocation algorithm. So when I started thinking of the reverse linear scan algorithm employed by LuaJIT, the advantages became clear:

• In LuaJIT, the IR maintains its SSA form - there is only a single definition of a value. This means that when iterating in reverse order, computing the live range becomes trivial. When we first encounter a use of a value, then by definition that is the last use. And when we encounter a definition, that is the only and single definition, and we can release the register. So there's no need to compute the live range in advance of allocation.
• Furthermore, rather than merging the values of multiple branches into the same live range, each value on either side becomes an individual live range. As a result, the live range of a value never has a hole, further simplifying the code.
• LuaJIT uses register hints to indicate which registers could best be picked for a specific value. This is often determined by how a value is used (e.g., the divisor in a div instruction must be in the rcx register).
If the preferred register can't be allocated, the register allocator inserts code to move the value to the right place where needed. Having hints can be expected to greatly reduce the need for such code.

There are downsides as well, of course. Not knowing exactly how long a value will be live while processing it may cause the algorithm to make worse choices about which values to spill. But I don't think that's really a great concern, since figuring out the best possible value is practically impossible anyway, and the most commonly cited heuristic - evict the value that is live furthest in the future, because this will release a register over a longer range of code, reducing the chance that we'll need to evict again - is still available. (After all, we do always know the last use, even if we don't necessarily know the first definition.)

Altogether, I'm quite excited about this algorithm; I think it will be a real simplification over the current implementation. Whether that will work out remains to be seen of course. I'll let you know!

## brrt to the future: Something about IR optimization

### Published by Bart Wiegmans on 2019-03-17T13:23:00

Hi hackers! Today I want to write about optimizing IR in the MoarVM JIT, and also a little bit about IR design itself. One of the (major) design goals for the expression JIT was to have the ability to optimize code over the boundaries of individual MoarVM instructions. To enable this, the expression JIT first expands each VM instruction into a graph of lower-level operators. Optimization then means pattern-matching those graphs and replacing them with more efficient expressions.

As a running example, consider the idx operator. This operator takes two inputs (base and element) and a constant parameter scale and computes base+element*scale. This represents one of the operands of an 'indexed load' instruction on x86, typically used to process arrays.
Such instructions allow one instruction to be used for what would otherwise be two operations (computing an address and loading a value). However, if the element of the idx operator is a constant, we can replace it instead with the addr instruction, which just adds a constant to a pointer. This is an improvement over idx because we no longer need to load the value of element into a register. This saves both an instruction and valuable register space.

Unfortunately this optimization introduces a bug. (Or, depending on your point of view, brings an existing bug out into the open.) The expression JIT code generation process selects instructions for subtrees (tiles) of the graph in a bottom-up fashion. These instructions represent the value computed or work performed by that subgraph. (For instance, a tree like (load (addr ? 8) 8) becomes mov ?, qword [?+8]; the question marks are filled in during register allocation.) Because an instruction always represents a tree, and because the graph is an arbitrary directed acyclic graph, the code generator projects that graph as a tree by visiting each operator node only once. So each value is computed once, and that computed value is reused by all later references.

It is worth going into some detail about why the expression graph is not a tree. Aside from transformations that might be introduced by optimizations (e.g. common subexpression elimination), a template may introduce a value that has multiple references via the let: pseudo-operator. See for instance the following (simplified) template:

(let: (($foo (load (local))))
(add $foo (sub$foo (const 1))))

In this case, both references to $foo point directly to the same load operator. Thus, the graph is not a tree. Another case in which this occurs is during linking of templates into the graph. The output of an instruction is used, if possible, directly as the input for another instruction. (This is the primary way that the expression JIT can get rid of unnecessary memory operations.) But there can be multiple instructions that use a value, in which case an operator can have multiple references. Finally, instruction operands are inserted by the compiler and these can have multiple references as well.

If each operator is visited only once during code generation, then this may introduce a problem when combined with another feature - conditional expressions. For instance, if two branches of a conditional expression both refer to the same value (represented by name $foo) then the code generator will only emit code to compute its value when it encounters the first reference. When the code generator encounters $foo for the second time in the other branch, no code will be emitted. This means that in the second branch, $foo will effectively have no defined value (because the code in the first branch is never executed), and wrong values or memory corruption is then the predictable result.
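The failure mode described above can be simulated in a dozen lines. This Python sketch (invented names, nothing from MoarVM itself) is a toy code generator that emits code for a DAG node only at its first reference, just like the visit-each-node-once strategy:

```python
# A toy code generator that visits every DAG node only once.
def emit(node, visited, out):
    if id(node) in visited:
        out.append("ref " + node["name"])   # later references just reuse the value
        return
    visited.add(id(node))
    for child in node.get("kids", ()):
        emit(child, visited, out)
    out.append("compute " + node["name"])

foo = {"name": "foo"}                       # one node shared by both branches
then_branch = {"name": "then", "kids": [foo]}
else_branch = {"name": "else", "kids": [foo]}

visited = set()
then_code, else_code = [], []
emit(then_branch, visited, then_code)  # 'compute foo' is emitted here...
emit(else_branch, visited, else_code)  # ...but this branch only gets 'ref foo'
print(then_code)  # ['compute foo', 'compute then']
print(else_code)  # ['ref foo', 'compute else']
```

If only the else branch executes at runtime, `foo` is referenced but never computed on that path, which is exactly the undefined-value scenario the paragraph warns about.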

This bug has existed for as long as the expression JIT has been under development, and in the past the solution has been not to write templates which have this problem. This is made a little easier by a feature of the let: operator: it inserts a do operator which orders the values that are declared to be computed before the code that references them. So this is in fact non-buggy:

(let: (($foo (load (local))) # code to compute$foo is emitted here
(if (...)
(add $foo (const 1)) #$foo is just a reference
       (sub $foo (const 2)))) # and here as well

The DO node is inserted for the LET operator. It ensures that the value of the LOAD node is computed before the reference in either branch.

Alternatively, if a value $foo is used in the condition of the if operator, you can also be sure that it is available in both sides of the condition.

All these methods rely on the programmer being able to predict when a value will be first referenced and hence evaluated. An optimizer breaks this by design. This means that if I want the JIT optimizer to be successful, my options are:

1. Fix the optimizer so as to not remove references that are critical for the correctness of the program
2. Modify the input tree so that such references are either copied or moved forward
3. Fix the code generator to emit code for a value, if it determines that an earlier reference is not available from the current block.
In other words, I first need to decide where this bug really belongs - in the optimizer, the code generator, or even the IR structure itself. The weakness of the expression IR is that expressions don't really impose a particular order. (This is unlike the spesh IR, which is instruction-based, and in which every instruction has a 'previous' and 'next' pointer.) Thus, there really isn't a 'first' reference to a value before the code generator introduces the concept. This property is in fact quite handy for optimization (for instance, we can evaluate operands in whatever order is best, rather than being fixed by the input order) - so I'd really like to preserve it. But it also means that the property we're interested in - a value is computed before it is used, in all possible code flow paths - isn't really expressible by the IR. And there is no obvious local invariant that can be maintained to ensure that this bug does not happen, so any correctness check may have to check the entire graph, which is quite impractical.

I hope this post explains why this is such a tricky problem! I have some ideas for how to get out of this, but I'll reserve those for a later post, since this one has gotten quite long enough. Until next time!

## my Timotimo \this: Always Wear Safety Equipment When Inline Scalaring

MoarVM just recently got a nice optimization merged into the master branch. It's called "partial escape analysis" and comes with a specific optimization named "scalar replacement". In simple terms, it allows objects whose lifetime can be proven to end very soon to be created on the stack instead of in the garbage-collector-managed heap. More than that, the "partial" aspect of the escape analysis even allows this for objects that can escape out of our grasp, but will not always do so.

Allocation on the stack is much cheaper than allocation in the garbage collected heap, because freeing data off of the stack is as easy as leaving the function behind.
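As a hedged illustration (an invented example, not MoarVM internals or output), scalar replacement conceptually turns the first function below into the second: the short-lived object's fields become plain locals, so no heap allocation is needed at all:

```python
import math

def distance(a, b):
    # A short-lived intermediate object that never escapes the function.
    delta = {"x": b[0] - a[0], "y": b[1] - a[1]}
    return math.hypot(delta["x"], delta["y"])

def distance_scalar_replaced(a, b):
    # What the optimizer conceptually produces: the object's fields
    # become plain locals, and the allocation disappears entirely.
    dx = b[0] - a[0]
    dy = b[1] - a[1]
    return math.hypot(dx, dy)

print(distance((0, 0), (3, 4)))                  # 5.0
print(distance_scalar_replaced((0, 0), (3, 4)))  # 5.0
```

The "partial" part of the analysis handles the harder case where the object escapes on some code paths but not others, keeping the cheap version for the paths where it stays local.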


Like a big portion of optimizations in MoarVM's run-time optimizer/specializer ("spesh"), this analysis and optimization usually relies on some prior inlining of code. That's where the pun in the post's title comes from.

This is a progress report for the profiler front-end, though. So the question I'd like to answer in this post is how the programmer would check if these optimizations are being utilized in their own code.

## A Full Overview

The first thing the user gets to see in the profiler's frontend is a big overview page summarizing lots of results from all aspects of the program's run. Thread creations, garbage collector runs, frame allocations made unnecessary by inlining, etc.

That page just got a new entry that sums up all allocations and also shows how many allocations have been made unnecessary by scalar replacement, along with a percentage. Here's an example screenshot:

That on its own is really only interesting if you've run a program twice and maybe changed the code in between to compare how allocation/optimization behavior changed in between. However, there's also more detail to be found on the Allocations page:

## Per-Type and Per-Routine Allocations/Replacements

The "Allocations" tab already gave details on which types were allocated most, and which routines did most of the allocating for any given type. Now there is an additional column that gives the number of scalar replaced objects as well:

Here's a screenshot showing the details of the Num type expanded to display the routines:

## Spesh Performance Overview

One major part of Spesh is its "speculative" optimizations. They have the benefit of allowing optimizations even when something can't be proven. When some assumption is broken, a deoptimization happens, which is effectively a jump from inside an optimized bit of code back to the same position in the unoptimized code. There's also "on-stack-replacement", which is effectively a jump from inside unoptimized code to the same position in optimized code. The details are, of course, more complicated than that.

How can you find out which routines in your program (in libraries you're calling, or the "core setting" of all builtin classes and routines) are affected by deoptimization or by OSR? There is now an extra tab in the profiler frontend that gives you the numbers:

This page also has the first big attempt at putting hopefully helpful text directly next to the data. Below the table there are these sections:

### Specializer Performance

MoarVM comes with a dynamic code optimizer called "spesh". It makes your code faster by observing at run time which types are used where, which methods end up being called in certain situations where there are multiple potential candidates, and so on. This is called specialization, because it creates versions of the code that take shortcuts based on assumptions it made from the observed data.

### Deoptimization

Assumptions, however, are there to be broken. Sometimes the optimized and specialized code finds that an assumption no longer holds. Parts of the specialized code that detect this are called "guards". When a guard detects a mismatch, the running code has to be switched from the optimized code back to the unoptimized code. This is called a "deoptimization", or "deopt" for short.

Deopts are a natural part of a program's life, and at low numbers they usually aren't a problem. For example, code that reads data from a file would read from a buffer most of the time, but at some point the buffer would be exhausted and new data would have to be fetched from the filesystem. This could mean a deopt.

If, however, the profiler points out a large amount of deopts, there could be an optimization opportunity.

### On-Stack Replacement (OSR)

Regular optimization activates when a function is entered, but programs often have loops that run for a long time until the containing function is entered again.

On-Stack Replacement is used to handle cases like this. Every round of the loop in the unoptimized code will check if an optimized version can be entered. This has the additional effect that a deoptimization in such code can quickly lead back into optimized code.

Situations like these can cause high numbers of deopts along with high numbers of OSRs.
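A minimal sketch of the situation OSR targets (an invented example, not from the post): a function that is entered only once but contains a long-running hot loop.

```perl6
# Invented example: MAIN is entered exactly once, so ordinary
# on-entry optimization would never get a second chance. With OSR,
# each round of the loop may jump into an optimized version instead.
sub MAIN() {
    my $acc = 0;
    for ^1_000_000 {
        $acc += $_;     # hot loop body, prime candidate for OSR
    }
    say $acc;
}
```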

I'd be happy to hear your thoughts on how clear this text is to you, especially if you're not very knowledgeable about the topics discussed. Check github for the current version of this text - it should be at https://github.com/timo/moarperf/blob/master/frontend/profiler/components/SpeshOverview.jsx#L82 - and submit an issue to the github repo, or find me on IRC, on the perl6-users mailing list, on reddit, on mastodon/the fediverse, twitter, or where-ever.

## my Timotimo \this: These graphs are on Fire!

Photo by JERRY / Unsplash

Just as I experienced with this very blog post you're reading right now, knowing how to start may just be the hardest part of a great many things.

When you've just run your profile – which I'm planning to make easier in the future as well – and you're looking at the overview page, you're really getting an overview of the very broadest kind. How long did the program run? Did it spend a lot of time running the GC? Are many routines not optimized or jitted?

However, profiling is usually used to find the critical little piece of code that takes an extraordinary amount of time. This is where your optimization attempts should usually begin.

Until now, the overview page didn't mention any piece of code by name at all. This changed when I brought in a flame graph (well, icicle graph in this case).

Here are two screenshots to give you an idea of what I'm talking about:

Clicking on a box in the flame graph will expand the node to be 100% the width of the full graph so you can inspect the descendant nodes more easily. Here I've selected the step function:

Selecting one of the nodes gives the name of the routine, the source code line and links to the node in the call graph explorer (the rightwards arrow button) and to the routine in the routine list. On top of that, the filename and line number below the routine name are clickable if they are from the core setting, and they take you right to the implementation file on github.

The next step is, of course, to also put flame graphs into the call graph explorer. I'm not entirely sure how to make it behave when navigating upwards to the root or downwards to the selected children, i.e. whether it should keep nodes towards the root or how many.

That's already everything for today. I couldn't invest terribly much time into moarperf this and last month, but I'll continue working :)

Have a good one, and thanks for reading!
- Timo

## brrt to the future: A short post about types and polymorphism

Hi all. I usually write somewhat long-winded posts, but today I'm going to try and make an exception. Today I want to talk about the expression template language used to map the high-level MoarVM instructions to low-level constructs that the JIT compiler can easily work with:

This 'language' was designed back in 2015 subject to three constraints:
• It should make it easy to develop 'templates' for MoarVM instructions, so we can map the ~800 or so different instructions supported by the interpreter to something the JIT compiler can work with.
• It should be simple to process and analyze; specifically, it should be suitable as input to the instruction selection process (the tiler).
• It should be simple to implement, both from the frontend (meaning the perl program that compiles a template file to a C header) and the backend (meaning the C code that combines templates into the IR that is compiled).
Recently I've been working on adding support for floating point operations, and this means working on the type system of the expression language. Because floating point instructions operate on a distinct set of registers from integer instructions, a floating point operator is not interchangeable with an integer (or pointer) operator.

This type system is enforced in two ways. First, by the template compiler, which attempts to check if you've used all operands correctly. This operates during development, which is convenient. Second, by instruction selection, as there will simply not be any instructions available that have the wrong combinations of types. Unfortunately, that happens at runtime, and such errors are so annoying to debug that they motivated the development of the first type checker.

However, this presents two problems. One of the advantages of the expression IR is that, by virtue of having a small number of operators, it is fairly easy to analyze. Having a distinct set of operators for each type would undo that. But more importantly, there are several MoarVM instructions that are generic, i.e. that operate on integer, floating point, and pointer values. (For example, the set, getlex and bindlex instructions are generic in this way). This makes it impossible to know, from the template alone, whether the values such an instruction operates on will be integers, pointers, or floats.

This is no problem for the interpreter since it can treat values as bags-of-bits (i.e., it can simply copy the union MVMRegister type that holds all values of all supported types). But the expression JIT works differently - it assumes that it can place any value in a register, and that it can reorder and potentially skip storing them to memory. (This saves work when the value would soon be overwritten anyway). So we need to know what register class that is, and we need to have the correct operators to manipulate a value in the right register class.

To summarize, the problem is:
• We need to know the type of each value, both to ensure we use the correct instructions and the right registers.
• There are several cases in which we don't really know (for the template) what type each value has.
There are two ways around this, and I chose to use both. First, we know as a fact for each local or lexical value in a MoarVM frame (subroutine) what type it should have. So even a generic operator like set can be resolved to a specific type at runtime, at which point we can select the correct operators. Second, we can introduce generic operators of our own. This is possible so long as we can select the correct instruction for an operator based on the types of the operands.

For instance, the store operator takes two operands, an address and a value. Depending on the type of the value (reg or num), we can always select the correct instruction (mov or movsd). It is however not possible to select different instructions for the load operator based on the type required, because instruction selection works from the bottom up. So we need a special load_num operator, but a store_num operator is not necessary. And this is true for a lot more operators than I had initially thought. For instance, aside from the (naturally generic) do and if operators, all arithmetic operators and comparison operators are generic in this way.
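As a loose analogy only (this is not the template compiler's actual code), resolving a generic operator to a concrete instruction based on operand types looks a lot like multiple dispatch in Perl 6:

```perl6
# An analogy, not JIT code: picking the concrete machine instruction
# for a generic `store` operator from the type of the value operand,
# mirroring the mov/movsd choice described in the post.
multi select-store(Int $value) { 'mov'   }  # integer register class
multi select-store(Num $value) { 'movsd' }  # floating point register class

say select-store(42);     # mov
say select-store(4.2e0);  # movsd
```

The real mechanism works bottom-up during instruction selection, which is exactly why a load (whose required type comes from above) needs its own load_num operator while store does not.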

I realize that, despite my best efforts, this has become a rather long-winded post anyway.....

Anyway. For the next week, I'll be taking a slight detour, and I aim to generalize the two-operand form conversion that is necessary on x86. I'll try to write a blog about it as well, and maybe it'll be short and to the point. See you later!

## brrt to the future: New years post

Hi everybody! I recently read jnthn's Perl 6 New Year's resolutions post, and I realized that this was an excellent example to emulate. So here I will attempt to share what I've been doing in 2018 and what I'll be doing in 2019.

In 2018, aside from the usual refactoring, bugfixing and the like:
• I added support for the fork() system call in the MoarVM backend.
• I removed the 'invokish' control flow mechanism and replaced it with controlled return address modification.
• I requested a grant from the Perl Foundation aiming to complete the expression JIT compiler with floating point support, irregular registers and the like.
So 2019 starts with me trying to complete the goals specified in that grant request. I've already partially completed one goal (as explained in the interim report) - ensuring that register encoding works correctly for SSE registers in DynASM. Next up is actually ensuring support for SSE (and floating point) registers in the JIT, which is surprisingly tricky, because it means introducing a type system where there wasn't really one previously. I will have more to report on that in the near future.

After that, the first thing on my list is the support for irregular register requirements in the register allocator, which should open up the possibility of supporting various instructions.

I guess that's all for now. Speak you later!

## 6guts: My Perl 6 wishes for 2019

This evening, I enjoyed the New Year’s fireworks display over the beautiful Prague skyline. Well, the bit of it that was between me and the fireworks, anyway. Rather than having its fireworks display at midnight, Prague puts it at 6pm on New Year’s Day. That makes it easy for families to go to, which is rather thoughtful. It’s also, for those of us with plans to dig back into work the following day, a nice end to the festive break.

So, tomorrow I’ll be digging back into work, which of late has involved a lot of Perl 6. Having spent so many years working on Perl 6 compiler and runtime design and implementation, it’s been fun to spend a good amount of 2018 using Perl 6 for commercial projects. I’m hopeful that will continue into 2019. Of course, I’ll be continuing to work on plenty of Perl 6 things that are for the good of all Perl 6 users too. In this post, I’d like to share some of the things I’m hoping to work on or achieve during 2019.

### Partial Escape Analysis and related optimizations in MoarVM

The MoarVM specializer learned plenty of new tricks this year, delivering some nice speedups for many Perl 6 programs. Many of my performance improvement hopes for 2019 center around escape analysis and optimizations stemming from it.

The idea is to analyze object allocations, and find pieces of the program where we can fully understand all of the references that exist to the object. The point at which we can no longer do that is where an object escapes. In the best cases, an object never escapes; in other cases, there are a number of reads and writes performed to its attributes up until its escape.

Armed with this, we can perform scalar replacement, which involves placing the attributes of the object into local registers up until the escape point, if any. As well as reducing memory operations, this means we can often prove significantly more program properties, allowing further optimization (such as getting rid of dynamic type checks). In some cases, we might never need to allocate the object at all; this should be a big win for Perl 6, which by its design creates lots of short-lived objects.
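A hedged sketch of the kind of code that could benefit (the class and routine below are invented for illustration, not taken from the post):

```perl6
# Invented example: the Point created inside `middle` is only read and
# never stored anywhere that outlives the call, so it never escapes.
# Escape analysis could scalar-replace $.x and $.y into registers and,
# ideally, skip allocating the object entirely.
class Point {
    has $.x;
    has $.y;
}

sub middle(Point $a, Point $b) {
    my $m = Point.new(x => ($a.x + $b.x) / 2, y => ($a.y + $b.y) / 2);
    # $m's attributes are read here, then $m is dead: no escape point.
    "$m.x() $m.y()"
}

say middle(Point.new(x => 0, y => 0), Point.new(x => 4, y => 2)); # 2 1
```

Whether MoarVM's specializer eliminates the allocation in a given case depends on the analysis described above; the sketch only shows the pattern it looks for.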

There will be various code-generation and static optimizer improvements to be done in Rakudo in support of this work also, which should result in its own set of speedups.

### Decreasing startup time and base memory use

The current Rakudo startup time is quite high. I’d really like to see it fall to around half of what it currently is during 2019. I’ve got some concrete ideas on how that can be achieved, including changing the way we store and deserialize NFAs used by the parser, and perhaps also dealing with the way we currently handle method caches to have less startup impact.

Both of these should also decrease the base memory use, which is also a good bit higher than I wish.

### Improving compilation times

Some folks – myself included – are developing increasingly large applications in Perl 6. For the current major project I’m working on, runtime performance is not an issue by now, but I certainly feel myself waiting a bit on compiles. Part of it is parse performance, and I’d like to look at that; in doing so, I’d also be able to speed up handling of all Perl 6 grammars.

I suspect there are some good wins to be had elsewhere in the compilation pipeline too, and the startup time improvements described above should also help, especially when we pre-compile deep dependency trees. I’d also like to look into if we can do some speculative parallel compilation.

### Research into concurrency safety

In Perl 6.d, we got non-blocking await and react support, which greatly improved the scalability of Perl 6 concurrent and parallel programs. Now many thousands of outstanding tasks can be juggled across just a handful of threads (the exact number chosen according to demand and CPU count).

For Perl 6.e, which is still a good way off, I’d like to have something to offer in terms of making Perl 6 concurrent and parallel programming safer. While we have a number of higher-level constructs that eliminate various ways to make mistakes, it’s still possible to get into trouble and have races when using them.

So, I plan to spend some time this year quietly exploring and prototyping in this space. Obviously, I want something that fits in with the Perl 6 language design, and that catches real and interesting bugs – probably by making things that are liable to occasionally explode in weird ways instead reliably do so in helpful ways, such that they show up reliably in tests.

### Get Cro to its 1.0 release

In the 16 months since I revealed it, Cro has become a popular choice for implementing HTTP APIs and web applications in Perl 6. It has also attracted code contributions from a couple of dozen contributors. This year, I aim to see Cro through to its 1.0 release. That will include (to save you following the roadmap link):

• Providing flexible HTTP reverse proxying support
• Providing a number of robustness patterns (timeout, retry, circuit breaker, and so forth)
• Providing Cro support for some message queue protocols
• Ensuring Cro’s features work on MacOS and Windows
• Providing some further help for those building server-side web applications (templating, an easier time setting up common login flows, etc.)
• Conducting a security review
• Specifying a stability/deprecation policy
• Lots of polishing

### Comma Community, and lots of improvements and features

I founded Comma IDE in order to bring Perl 6 a powerful Integrated Development Environment. We’ve come a long way since the Minimum Viable Product we shipped back in June to the first subscribers to the Comma Supporter Program. In recent months, I’ve used Comma almost daily on my various Perl 6 projects, and by this point honestly wouldn’t want to be without it. Like Cro, I built Comma because it’s a product I wanted to use, which I think is a good place to be in when building any product.

In a few months time, we expect to start offering Comma Community and Comma Complete. The former will be free of charge, and the latter a commercial offering under a subscribe-for-updates model (just like how the supporter program has worked so far). My own Comma wishlist is lengthy enough to keep us busy for a lot more than the next year, and that’s before considering things Comma users are asking for. Expect plenty of exciting new features, as well as ongoing tweaks to make each release feel that little bit nicer to use.

### Speaking, conferences, workshops, etc.

This year will see me giving my first keynote at a European Perl Conference. I’m looking forward to being in Riga again; it’s a lovely city to wander around, and I remember having some pretty nice food there too. The keynote will focus on the concurrent and parallel aspects of Perl 6; thankfully, I’ve still a good six months to figure out exactly what angle I wish to take on that, having spoken on the topic many times before!

I also plan to submit a talk or two for the German Perl Workshop, and will probably find the Swiss Perl Workshop hard to resist attending once more. And, more locally, I’d like to try and track down other Perl folks here in Prague, and see if I can help some kind of Praha.pm to happen again.

I need to keep my travel down to sensible levels, but might be able to fit in the odd other bit of speaking during the year, if it’s not too far away.

### Teaching

While I want to spend most of my time building stuff rather than talking about it, I’m up for the occasional bit of teaching. I’m considering pitching a 1-day Perl 6 concurrency workshop to the Riga organizers. Then we’ll see if there’s enough folks interested in taking it. It’ll cost something, but probably much less than any other way of getting a day of teaching from me. :-)

### So, down to work!

Well, a good night’s sleep first. :-) But tomorrow, another year of fun begins. I’m looking forward to it, and to working alongside many wonderful folks in the Perl community.

## Perl 6 Advent Calendar: Day 25 – Calling Numbers Names

This school semester I took my first proof-based class titled “Intro to Mathematical Proof Workshop”. After having taken other math classes (Calculus, Matrix Algebra, etc.), I felt that I didn’t have that much of a mathematical foundation and up to this point, all I had been doing was purely computational mathematics sprinkled with some proofs here and there. Looking back, I found the class quite enjoyable and learning about different theorems and their proofs, mostly from number theory, has given me a new perspective of mathematics.

“How is this related to Perl 6?”, you might be asking. As I mentioned, most of the proofs that were discussed either in class or left for homework were related to number theory. If there’s one thing Perl 6 and number theory have in common, it’s their accessibility. Similar to how the content of the elementary theory of numbers can be tangible and familiar, Perl 6 can be quite approachable to beginners. In fact, beginners are encouraged to write what’s known as “baby Perl”.

Another thing they seem to share is their vastness. For example, in Perl 6 one can find many operators, while in number theory one can find a plethora of different types of numbers, from even numbers to cute numbers. For most purposes, these numbers are easy to understand, and if one has the definition of a number, then it’s quite easy to check if a given integer falls into that category. For example, a prime number is formally defined as follows:

An integer p > 1 is called a prime number, or simply a prime, if its only positive divisors are 1 and p. Otherwise, the integer p is termed composite.

By using this definition, we can quite simply figure out if a certain number is a prime. For example, among the first ten positive integers, 2, 3, 5 and 7 are primes. This is quite trivial for small numbers but doing it by hand with larger numbers would become tedious in no time. This is where Perl 6 comes into the picture. Perl 6 offers many constructs/features that even if they don’t make the job easy, they certainly simplify it. For instance, with the definition of a prime in mind, we could easily implement an algorithm that tests for primality in Perl 6:

sub isPrime( $number ) {
    return $number > 1 if $number ≤ 3;

    loop (my $i = 2; $i² ≤ $number; $i++) {
        return False if $number %% $i;
    }

    return True;
}

Please bear in mind that this is not about writing performant code. If the code turns out that way, then it would be excellent, but that is not the goal. My aim is to showcase the ease with which a beginner can express mathematical constructs in Perl 6. It’s worth mentioning that Perl 6 already includes the subroutine (or method) is-prime that tests for primality. However, although this is true for prime numbers, it might not be the case for another type of number you might come across, such as a factorial, a factorion or even a Catalan number. And in cases like this, Perl 6 will be helpful.

After learning about different types of numbers, I set out to look for some peculiar numbers and see how I could implement them using Perl 6. In the process, I found this useful website that lists a bunch of numbers, their definitions and some examples. From all of these, I’ve chosen four types of numbers that aren’t stupidly difficult to implement (I still write baby Perl) while being enough to illustrate some Perl 6 constructs. On the other hand, I’ve avoided those which might be too straightforward. Let us start with…

## Amicable numbers

Amicable numbers, also known as friendly numbers, are pairs of numbers each of whose aliquot parts add up to give the other number.

sub aliquot-parts( $number ) {
    (^$number).grep: $number %% *;
}

sub infix:<amic>( $m, $n ) {
$m == aliquot-parts($n).sum &&
$n == aliquot-parts($m).sum;
}

say 12 amic 28;   # False, 12 and 28 aren't amicables.
say 220 amic 284; # True, 220 and 284 are though.


A number’s aliquot parts are its factors excluding the number itself. To find the aliquot parts of a number, I’ve created the subroutine aliquot-parts, which uses 1..^$number to create a list of numbers from 1 up to $number (exclusive). This list is subsequently grepped to find those numbers in the list that evenly divide $number. In this snippet that’s achieved with the infix operator %%, which returns True if its first operand is divisible by its second operand, and False otherwise. The second operand stands for any number in the aforementioned list, so I’ve used *, which in this context is known as the whatever star and creates a closure over the expression $number %% *. Thus the whole expression in the subroutine is equivalent to (^$number).grep: { $number %% $_ };. In the end, the subroutine returns a list of factors of $number excluding $number itself.

To find out if two numbers are amicable, we could have used just a subroutine. However, Perl 6 allows for the creation of new operators, which are just subroutines with funny names themselves, and I’ve done just that. I created the infix operator (meaning it goes between two operands) amic that returns True if two numbers are amicable, and False otherwise. As you can see, the syntax to create a new operator is straightforward: the keyword sub, followed by the type of the operator (prefix, infix, postfix, etc.), the name of the operator inside quoting constructs, the expected parameters and a code block.

## Factorion

A factorion is a natural number that equals the sum of the factorials of its digits in a given base.

subset Whole of Int where * ≥ 0;

sub postfix:<!>( Whole $N --> Whole ) {
    [*] 1..$N;
}

sub is-factorion( Whole $number --> Bool ) {
    $number == $number.comb.map({ Int($_)! }).sum
}

say is-factorion(25);  # False
say is-factorion(145); # True

Recall that the factorial of a number N, usually denoted N!, is the product 1 x 2 x ... x N. For example, 3! = 1 x 2 x 3 = 6. In the code snippet, I created the postfix operator ! to return the factorial of an integer operand. Thus say 3!; works just fine in the code snippet and prints 6. How the factorial is calculated is straightforward: the range 1..$N creates a list of numbers from 1 to $N (inclusive), and then I use [...], known as the reduction meta-operator, with the operator * to reduce the created list to 1 x 2 x ... x $N, which effectively gives me the factorial of $N. There are many operators in Perl 6, and the meta-operator [...] can work with many of them.

As for the factorion, I want to know if a number is a factorion, so I created a subroutine that takes an integer and returns a Boolean. Perl 6 is gradually typed, so it allows you to type variables explicitly, specify the return type of a sub, etc. I’ve decided to type the subroutines’ parameters and the subroutine’s return type. In the section about the amicable numbers, I was quite liberal regarding the subroutines’ arguments. However, here I’ve decided to comply with the definition of a factorial and only allow whole numbers, hence the definition and use of the Whole type. In Perl 6, the subset keyword declares a new type using a base type. However, if I hadn’t used the where clause, I’d have ended up with just another name for the Int type, which would be redundant. So I used the where clause to constrain the type of any assignment to the desired input. In this case, the assignment to a variable of type Whole.

With the is-factorion sub, I used the method comb to break up $number into its digits and then map to find their respective factorials and sum them up. The sub returns True if $number is equal to the sum of the factorials of its digits. It returns False otherwise.
## Cyclic Numbers

A cyclic number is a number with N digits which, when multiplied by 1, 2, 3, ..., N, produces the same digits in a different order.

sub is-cyclic( Int $n --> Bool ) {
    for 1..$n.chars -> $d {
        return False if $n.comb.Bag != ($n * $d).comb.Bag;
    }
    return True;
}

say is-cyclic(142857); # True
say is-cyclic(95678);  # False

Here I created the subroutine is-cyclic that takes an integer and returns a Boolean value. I use a for loop to iterate over the places of the number’s digits (1st, 2nd, etc.) and use them to multiply the number in each iteration. Afterward I use the previously seen comb method followed by the Bag method. In Perl 6, a Bag is an immutable collection of distinct elements in no particular order, where each element is weighted by the number of copies in the collection. This is the kind of structure I need, since only the number’s digits and their amounts are important, not their order, and a Bag accomplishes exactly this. The subroutine returns False if the bags don’t have the same numbers, or have the same numbers but weighted differently. Otherwise, True is returned, indicating the number’s cyclic-ness.

## Happy numbers

A happy number is defined by the following process: starting with any positive integer, replace the number by the sum of the squares of its digits in base ten, and repeat the process until the number either equals 1 (where it will stay), or it loops endlessly in a cycle that does not include 1.

sub is-happy( $n is copy ) {
    my $seen-numbers = :{};
    while $n > 1 {
        return False if $n ∈ $seen-numbers;
        $seen-numbers{$n} = True;
        $n = $n.comb.map(*²).sum
    }
    return True;
}

say is-happy(7);     # True
say is-happy(2018);  # False

After going through the process described in the definition, a happy number ends up being equal to 1. On the other hand, a non-happy number follows a sequence that reaches the cycle 4, 16, 37, 58, 89, 145, 42, 20, 4, … which excludes 1. Armed with this fact, I created the hash $seen-numbers to keep track of such numbers. As illustrated by the while loop, the process is repeated while $n is greater than 1, or until a number has been seen. Here the line that stands out is the one containing the symbol ∈. In set theory, if an element p is a member (or element) of a set A, this is denoted by p ∈ A, and that is exactly what’s being tested here. If the number $n is an element of the hash, then the sub returns False. Otherwise, it returns True, which indicates the number’s happiness.

## Summary

In this post, I briefly went over gradual typing, how to define a new operator, subtyping by using the subset keyword, and the set and bag data structures. As you may have realized, Perl 6 offers many constructs that facilitate many different tasks. In this instance, it was my desire to express definitions of numbers in a more programmatic way. Yours could be totally different, but you can rest assured that Perl 6 is there to make the job easier and immensely more enjoyable.

Well… that’s all folks! Happy Christmas and a wonderful New Year!

## Perl 6 Advent Calendar: Day 24 – Topic Modeling with Perl 6

### Published by titsuki on 2018-12-24T00:01:47

Hi, everyone. Today, let me introduce Algorithm::LDA. This module is a Latent Dirichlet Allocation (i.e., LDA) implementation for topic modeling.

# Introduction

What’s LDA? LDA is one of the popular unsupervised machine learning methods. It models the document generation process and represents each document as a mixture of topics. So, what does “a mixture of topics” mean? Fig. 1 shows an article in which some of the words are highlighted in three colors: yellow, pink, and blue. Words about genetics are marked in yellow; words about evolutionary biology are marked in pink; words about data analysis are marked in blue. Imagine all of the words in this article are colored; then we can represent this article as a mixture of topics (i.e., colors).

Fig. 1: (This image is from “Probabilistic topic models.” (David Blei 2012))

OK, then I’ll demonstrate how to use Algorithm::LDA in the next section.

# Modeling Quotations

In this article, we explore Wikiquote. Wikiquote is a crowd-sourced platform providing sourced quotations. Using the Wikiquote API, we get quotations that are used for LDA estimation. After that, we execute LDA and plot the result. Finally, we create an information retrieval application using the resulting model.

## Preliminary

### Wikiquote API

Wikiquote has an action API that provides means for getting Wikiquote resources. For example, you can get the content of the Main Page as follows:

$ curl "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&titles=Main%20Page&rvprop=content&format=json"


The result of the above command is:

{"batchcomplete":"","warnings":{"main":{"*":"Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes. Use [[Special:ApiFeatureUsage]] to see usage of deprecated features by your application."},"revisions":{"*":"Because \"rvslots\" was not specified, a legacy format has been used for the output. This format is deprecated, and in the future the new format will always be used."}},"query":{"pages":{"1":{"pageid":1,"ns":0,"title":"Main Page","revisions":[{"contentformat":"text/x-wiki","contentmodel":"wikitext","*":"\n{{Main page header}}\n{{Main Page Quote of the day}}\n</div>\n\n \n{{Main Page Selected pages}}\n{{Main categories}}\n\n\n \n{{New pages}}\n{{Main Page Community}}\n\n\n\n==Wikiquote's sister projects==\n{{otherwiki}}\n\n==Wikiquote languages==\n{{Wikiquotelang}}\n\n__NOTOC__ __NOEDITSECTION__\n{{noexternallanglinks:ang|simple}}\n[[Category:Main page]]"}]}}}}


### WWW

WWW by Zoffix Znet is a library which provides an easy-to-use API for fetching and parsing JSON.
For instance, as the README says, you can easily get content by jget(URL)<HASHKEY> style:

say jget('https://httpbin.org/get?foo=42&bar=x')<args><foo>;

To install WWW:

$ zef install WWW

### Chart::Gnuplot

Chart::Gnuplot by titsuki is a binding for gnuplot.

To install Chart::Gnuplot:

$ zef install Chart::Gnuplot

In this article, we use this module; however, if you're unfamiliar with gnuplot, there are many alternatives: SVG::Plot, Graphics::PLplot, or calling matplotlib functions via Inline::Python.

### Stopwords from NLTK

NLTK is a toolkit for natural language processing.
Besides APIs, it also provides corpora.
You can get stopwords for English via “70. Stopwords Corpus”: http://www.nltk.org/nltk_data/

## Exercise 1: Get Quotations and Create Cleaned Documents

At the beginning, we have to get quotations from Wikiquote and create clean documents.

The main goal of this section is to create documents according to the following format:

<docid> <personid> <word> <word> <word> ...
<docid> <personid> <word> <word> <word> ...
<docid> <personid> <word> <word> <word> ...


The whole source code is:

use v6.c;
use WWW;
use URI::Escape;

sub get-members-from-category(Str $category --> List) {
my $member-url = "https://en.wikiquote.org/w/api.php?action=query&list=categorymembers&cmtitle={$category}&cmlimit=100&format=json";
@(jget($member-url)<query><categorymembers>.map(*<title>));
}

sub get-pages(Str @members, Int $batch = 50 --> List) {
my Int $start = 0;
my @pages;
while $start < @members {
my $list = @members[$start..^List($start + $batch, +@members).min].map({ uri_escape($_) }).join('%7C');
my $url = "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&formatversion=2&titles={$list}";
@pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });
$start +=$batch;
}
@pages;
}

sub create-documents-from-pages(@pages, @members --> List) {
    my @documents;
    for @pages -> $page {
        my @quotations = $page<body>.split("\n")\
            .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[|]>+?)\]\]/, { $<text> }, :g))\
            .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
            .map(*.subst("&#91;", "[", :g))\
            .map(*.subst("&#93;", "]", :g))\
            .map(*.subst("&amp;", "&", :g))\
            .map(*.subst("&nbsp;", "", :g))\
            .map(*.subst(/:i [ \<\/?\s?br\> | \<br\s?\/?\> ]/, " ", :g))\
            .grep(/^\*<-[*]>/)\
            .map(*.subst(/^\*\s+/, ""));

        # Note: The order of the array the Wikiquote API returns is not guaranteed.
        my Int $index = @members.pairs.grep({ .value eq $page<title> }).map(*.key).head;
        @documents.push(%(body => $_, personid => $index)) for @quotations;
    }
    @documents.sort({ $^a<personid> <=> $^b<personid> }).pairs.map({ %(docid => .key, personid => .value<personid>, body => .value<body>) }).list
}

my Str @members = get-members-from-category("Category:1954_births");
my @pages = get-pages(@members);
my @documents = create-documents-from-pages(@pages, @members);

my $docfh = open "documents.txt", :w;
$docfh.say((.<docid>, .<personid>, .<body>).join(" ")) for @documents;
$docfh.close;

my $memfh = open "members.txt", :w;
$memfh.say($_) for @members;
$memfh.close;

First, we get the members listed in the "Category:1954_births" page. I chose the year in which the Perl 6 designer was born:

my Str @members = get-members-from-category("Category:1954_births");

where get-members-from-category gets members via Wikiquote API:

sub get-members-from-category(Str $category --> List) {
    my $member-url = "https://en.wikiquote.org/w/api.php?action=query&list=categorymembers&cmtitle={$category}&cmlimit=100&format=json";
    @(jget($member-url)<query><categorymembers>.map(*<title>));
}

Next, call get-pages:

my @pages = get-pages(@members);

get-pages is a subroutine that gets pages of the given titles (i.e., members):

sub get-pages(Str @members, Int $batch = 50 --> List) {
    my Int $start = 0;
    my @pages;
    while $start < @members {
        my $list = @members[$start ..^ List($start + $batch, +@members).min].map({ uri_escape($_) }).join('%7C');
        my $url = "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&formatversion=2&titles={$list}";
        @pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });
        $start += $batch;
    }
    @pages;
}

where @members[$start ..^ List($start + $batch, +@members).min] is a slice of (at most) length $batch, whose elements are percent-encoded by uri_escape and joined by %7C (the percent-encoded pipe symbol).
In this case, one of the resulting $list values is:

Mumia%20Abu-Jamal%7CRene%20Balcer%7CIain%20Banks%7CGerard%20Batten%7CChristie%20Brinkley%7CJames%20Cameron%20%28director%29%7CEugene%20Chadbourne%7CJackie%20Chan%7CChang%20Yu-hern%7CLee%20Child%7CHugo%20Ch%C3%A1vez%7CDon%20Coscarelli%7CElvis%20Costello%7CDaayiee%20Abdullah%7CThomas%20H.%20Davenport%7CGerardine%20DeSanctis%7CAl%20Di%20Meola%7CKevin%20Dockery%20%28author%29%7CJohn%20Doe%20%28musician%29%7CF.%20J.%20Duarte%7CIain%20Duncan%20Smith%7CHerm%20Edwards%7CAbdel%20Fattah%20el-Sisi%7CRob%20Enderle%7CRecep%20Tayyip%20Erdo%C4%9Fan%7CAlejandro%20Pe%C3%B1a%20Esclusa%7CHarvey%20Fierstein%7CCarly%20Fiorina%7CGary%20L.%20Francione%7CAshrita%20Furman%7CMary%20Gaitskill%7CGeorge%20Galloway%7C%C5%BDeljko%20Glasnovi%C4%87%7CGary%20Hamel%7CFran%C3%A7ois%20Hollande%7CKazuo%20Ishiguro%7CJean-Claude%20Juncker%7CAnish%20Kapoor%7CGuy%20Kawasaki%7CRobert%20Francis%20Kennedy%2C%20Jr.%7CLawrence%20M.%20Krauss%7CAnatoly%20Kudryavitsky%7CAnne%20Lamott%7CJoep%20Lange%7CAng%20Lee%7CLi%20Bin%7CRay%20Liotta%7CPeter%20Lipton%7CJames%20D.%20Macdonald%7CKen%20MacLeod

Note that the get-pages subroutine uses the hash contextualizer %() for creating a sequence of hashes:

@pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });

After that, we call create-documents-from-pages:

my @documents = create-documents-from-pages(@pages, @members);

create-documents-from-pages creates documents from each page:

sub create-documents-from-pages(@pages, @members --> List) {
    my @documents;
    for @pages -> $page {
        my @quotations = $page<body>.split("\n")\
            .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[|]>+?)\]\]/, { $<text> }, :g))\
            .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
            .map(*.subst("&#91;", "[", :g))\
            .map(*.subst("&#93;", "]", :g))\
            .map(*.subst("&amp;", "&", :g))\
            .map(*.subst("&nbsp;", "", :g))\
            .map(*.subst(/:i [ \<\/?\s?br\> | \<br\s?\/?\> ]/, " ", :g))\
            .grep(/^\*<-[*]>/)\
            .map(*.subst(/^\*\s+/, ""));

        # Note: The order of the array the Wikiquote API returns is not guaranteed.
        my Int $index = @members.pairs.grep({ .value eq $page<title> }).map(*.key).head;
        @documents.push(%(body => $_, personid => $index)) for @quotations;
    }
    @documents.sort({ $^a<personid> <=> $^b<personid> }).pairs.map({ %(docid => .key, personid => .value<personid>, body => .value<body>) }).list
}
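For readers more comfortable in Python, the batching-and-joining step of get-pages can be sketched as follows (using urllib's quote as a stand-in for uri_escape; function name is illustrative):

```python
# Build "%7C"-joined batches of percent-encoded titles: slices of at most
# `batch` members, each title URI-escaped, joined by an encoded pipe ("%7C").
from urllib.parse import quote

def batches(members, batch=50):
    for start in range(0, len(members), batch):
        yield "%7C".join(quote(m) for m in members[start:start + batch])

titles = ["Larry Wall", "Jackie Chan", "James Cameron (director)"]
print(list(batches(titles, batch=2)))
# → ['Larry%20Wall%7CJackie%20Chan', 'James%20Cameron%20%28director%29']
```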


where .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[|]>+?)\]\]/, { $<text> }, :g)) and .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g)) are converting commands that extract the display text of each anchor and drop the internal link target. For example, [[Perl]] is reduced to Perl.

For more syntax info, see: https://docs.perl6.org/language/regexes#Named_captures or https://docs.perl6.org/routine/subst

After some cleaning operations (e.g., .map(*.subst("&#91;", "[", :g))), we extract quotation lines. .grep(/^\*<-[*]>/) finds lines starting with a single asterisk, because most of the quotations appear in such lines. Next, .map(*.subst(/^\*\s+/, "")) deletes each asterisk, since the asterisk itself isn't a constituent of the quotation.

Finally, we save the documents and members (i.e., titles):

my $docfh = open "documents.txt", :w;
$docfh.say((.<docid>, .<personid>, .<body>).join(" ")) for @documents;
$docfh.close;

my $memfh = open "members.txt", :w;
$memfh.say($_) for @members;
$memfh.close;

## Exercise 2: Execute LDA and Visualize the Result

In the previous section, we saved the cleaned documents.
In this section, we use the documents for LDA estimation and visualize the result.
The goal of this section is to plot a document-topic distribution and write a topic-word table.

The whole source code is:

use v6.c;
use Algorithm::LDA;
use Algorithm::LDA::Formatter;
use Algorithm::LDA::LDAModel;
use Chart::Gnuplot;
use Chart::Gnuplot::Subset;

sub create-model(@documents --> Algorithm::LDA::LDAModel) {
    my $stopwords = "stopwords/english".IO.lines.Set;
    my &tokenizer = -> $line { $line.words.map(*.lc).grep(-> $w { ($stopwords !(cont) $w) and $w !~~ /^[ <:S> | <:P> ]+$/ }) };
    my ($documents, $vocabs) = Algorithm::LDA::Formatter.from-plain(@documents.map({ my ($, $, *@body) = .words; @body.join(" ") }), &tokenizer);
    my Algorithm::LDA $lda .= new(:$documents, :$vocabs);
    my Algorithm::LDA::LDAModel $model = $lda.fit(:num-topics(10), :num-iterations(500), :seed(2018));
    $model
}

sub plot-topic-distribution($model, @members, @documents, $search-regex = rx/Larry/) {
    my $target-personid = @members.pairs.grep({ .value ~~ $search-regex }).map(*.key).head;
    my $docid = @documents.map({ my ($docid, $personid, *@body) = .words; %(docid => $docid, personid => $personid, body => @body.join(" ")) })\
        .grep({ .<personid> == $target-personid and .<body> ~~ /:i << perl >>/ }).map(*<docid>).head;

    note("@documents[$docid] is selected");
    my ($row-size, $col-size) = $model.document-topic-matrix.shape;
    my @doc-topic = gather for ($docid X ^$col-size) -> ($i, $j) {
        take $model.document-topic-matrix[$i;$j];
    }
    my Chart::Gnuplot $gnu .= new(:terminal("png"), :filename("topics.png"));
    $gnu.command("set boxwidth 0.5 relative");
    my AnyTicsTic @tics = @doc-topic.pairs.map({ %(:label(.key), :pos(.key)) });
    $gnu.legend(:off);
    $gnu.xlabel(:label("Topic"));
    $gnu.ylabel(:label("P(z|theta,d)"));
    $gnu.xtics(:tics(@tics));
    $gnu.plot(:vertices(@doc-topic.pairs.map({ @(.key, .value.exp) })), :style("boxes"), :fill("solid"));
    $gnu.dispose;
}

sub write-nbest($model) {
    my $topics := $model.nbest-words-per-topic(10);
    for ^(10/5) -> $part-i {
        say "|" ~ (^5).map(-> $t { "topic { $part-i * 5 + $t }" }).join("|") ~ "|";
        say "|" ~ (^5).map({ "----" }).join("|") ~ "|";
        for ^10 -> $rank {
            say "|" ~ gather for ($part-i * 5)..^($part-i * 5 + 5) -> $topic {
                take @($topics)[$topic;$rank].key;
            }.join("|") ~ "|";
        }
        "".say;
    }
}

sub save-model($model) {
    my @document-topic-matrix := $model.document-topic-matrix;
    my ($document-size, $topic-size) = @document-topic-matrix.shape;
    my $doctopicfh = open "document-topic.txt", :w;

    $doctopicfh.say: ($document-size, $topic-size).join(" ");
    for ^$document-size -> $doc-i {
        $doctopicfh.say: gather for ^$topic-size -> $topic { take @document-topic-matrix[$doc-i;$topic] }.join(" ");
    }
    $doctopicfh.close;

    my @topic-word-matrix := $model.topic-word-matrix;
    my ($, $word-size) = @topic-word-matrix.shape;
    my $topicwordfh = open "topic-word.txt", :w;

    $topicwordfh.say: ($topic-size, $word-size).join(" ");
    for ^$topic-size -> $topic-i {
        $topicwordfh.say: gather for ^$word-size -> $word { take @topic-word-matrix[$topic-i;$word] }.join(" ");
    }
    $topicwordfh.close;

    my @vocabulary := $model.vocabulary;
    my $vocabfh = open "vocabulary.txt", :w;

    $vocabfh.say($_) for @vocabulary;
    $vocabfh.close;
}

my @documents = "documents.txt".IO.lines;
my $model = create-model(@documents);
my @members = "members.txt".IO.lines;
plot-topic-distribution($model, @members, @documents);
write-nbest($model);
save-model($model);

First, we load the cleaned documents and call create-model:

my @documents = "documents.txt".IO.lines;
my $model = create-model(@documents);

create-model creates an LDA model from the given documents:

sub create-model(@documents --> Algorithm::LDA::LDAModel) {
    my $stopwords = "stopwords/english".IO.lines.Set;
    my &tokenizer = -> $line { $line.words.map(*.lc).grep(-> $w { ($stopwords !(cont) $w) and $w !~~ /^[ <:S> | <:P> ]+$/ }) };
    my ($documents, $vocabs) = Algorithm::LDA::Formatter.from-plain(@documents.map({ my ($, $, *@body) = .words; @body.join(" ") }), &tokenizer);
    my Algorithm::LDA $lda .= new(:$documents, :$vocabs);
    my Algorithm::LDA::LDAModel $model = $lda.fit(:num-topics(10), :num-iterations(500), :seed(2018));
    $model
}

where $stopwords is a set of English stopwords from NLTK (mentioned in the preliminary section), and &tokenizer is a custom tokenizer for Algorithm::LDA::Formatter.from-plain. The tokenizer converts a given sentence as follows:

1. Splits the given sentence on whitespace and makes a list of tokens.
2. Replaces the characters of each token with their lower-case counterparts.
3. Deletes any token that exists in the stopwords list, as well as tokens categorized as Symbol or Punctuation.

Algorithm::LDA::Formatter.from-plain creates numerical native documents (i.e., each word in a document is mapped to its corresponding vocabulary id, and this id is represented by a C int32) and a vocabulary from a list of texts.

After creating an Algorithm::LDA instance using the above numerical documents, we can start LDA estimation with Algorithm::LDA.fit. In this example, we set the number of topics to 10, the number of iterations to 500, and the seed for srand to 2018.

Next, we plot a document-topic distribution. Before plotting, we load the saved members:

my @members = "members.txt".IO.lines;
plot-topic-distribution($model, @members, @documents);
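As an aside, the tokenizer rules can be sketched in Python (a rough equivalent only: the Unicode category check approximates Perl 6's <:S> and <:P> classes, and the stopword set here is a tiny stand-in for the NLTK list):

```python
# Tokenize as described above: split on whitespace, lowercase, drop stopwords,
# and drop tokens made entirely of Symbol (S*) or Punctuation (P*) characters
# (the Perl 6 regex uses +, so multi-character runs match too).
import unicodedata

def tokenize(line, stopwords):
    tokens = []
    for token in line.split():
        token = token.lower()
        if token in stopwords:
            continue
        if all(unicodedata.category(ch)[0] in "SP" for ch in token):
            continue  # approximates /^[ <:S> | <:P> ]+$/
        tokens.append(token)
    return tokens

print(tokenize("More Than One Way To Do It !", {"than", "to", "do", "it"}))
# → ['more', 'one', 'way']
```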

plot-topic-distribution plots topic distribution with Chart::Gnuplot:

sub plot-topic-distribution($model, @members, @documents, $search-regex = rx/Larry/) {
    my $target-personid = @members.pairs.grep({ .value ~~ $search-regex }).map(*.key).head;
    my $docid = @documents.map({ my ($docid, $personid, *@body) = .words; %(docid => $docid, personid => $personid, body => @body.join(" ")) })\
        .grep({ .<personid> == $target-personid and .<body> ~~ /:i << perl >>/ }).map(*<docid>).head;

    note("@documents[$docid] is selected");
    my ($row-size, $col-size) = $model.document-topic-matrix.shape;
    my @doc-topic = gather for ($docid X ^$col-size) -> ($i, $j) {
        take $model.document-topic-matrix[$i;$j];
    }
    my Chart::Gnuplot $gnu .= new(:terminal("png"), :filename("topics.png"));
    $gnu.command("set boxwidth 0.5 relative");
    my AnyTicsTic @tics = @doc-topic.pairs.map({ %(:label(.key), :pos(.key)) });
    $gnu.legend(:off);
    $gnu.xlabel(:label("Topic"));
    $gnu.ylabel(:label("P(z|theta,d)"));
    $gnu.xtics(:tics(@tics));
    $gnu.plot(:vertices(@doc-topic.pairs.map({ @(.key, .value.exp) })), :style("boxes"), :fill("solid"));
    $gnu.dispose;
}

In this example, we plot the topic distribution of a Larry Wall quotation ("Although the Perl Slogan is There's More Than One Way to Do It, I hesitate to make 10 ways to do something."):

After the plotting, we call write-nbest:

write-nbest($model);

In LDA, what each topic represents is expressed as a list of words. write-nbest writes a markdown-style topic-word distribution table:

sub write-nbest($model) {
    my $topics := $model.nbest-words-per-topic(10);
    for ^(10/5) -> $part-i {
        say "|" ~ (^5).map(-> $t { "topic { $part-i * 5 + $t }" }).join("|") ~ "|";
        say "|" ~ (^5).map({ "----" }).join("|") ~ "|";
        for ^10 -> $rank {
            say "|" ~ gather for ($part-i * 5)..^($part-i * 5 + 5) -> $topic {
                take @($topics)[$topic;$rank].key;
            }.join("|") ~ "|";
        }
        "".say;
    }
}

The result is:

|topic 0|topic 1|topic 2|topic 3|topic 4|
|----|----|----|----|----|
|would|scotland|black|could|one|
|it’s|country|mr.|first|work|
|believe|one|lot|law|new|
|one|political|play|college|human|
|took|world|official|basic|process|
|don’t|must|reacher|language|becomes|
|ever|national|five|every|good|
|far|many|car|matter|world|

|topic 5|topic 6|topic 7|topic 8|topic 9|
|----|----|----|----|----|
|apple|united|people|like|*/|
|likely|war|would|one|die|
|company|states|i’m|something|und|
|jobs|years|know|think|quantum|
|even|would|think|way|play|
|steve|american|want|things|noble|
|life|president|get|perl|home|
|like|human|going|long|dog|
|end|must|say|always|student|
|small|us|go|really|ist|

As you can see, the quotation of “Although the Perl Slogan is There’s More Than One Way to Do It, I hesitate to make 10 ways to do something.” contains “one”, “way” and “perl”. This is the reason why this quotation is mainly composed of topic 8.

For the next section, we save the model by save-model subroutine:

sub save-model($model) {
    my @document-topic-matrix := $model.document-topic-matrix;
    my ($document-size, $topic-size) = @document-topic-matrix.shape;
    my $doctopicfh = open "document-topic.txt", :w;

    $doctopicfh.say: ($document-size, $topic-size).join(" ");
    for ^$document-size -> $doc-i {
        $doctopicfh.say: gather for ^$topic-size -> $topic { take @document-topic-matrix[$doc-i;$topic] }.join(" ");
    }
    $doctopicfh.close;

    my @topic-word-matrix := $model.topic-word-matrix;
    my ($, $word-size) = @topic-word-matrix.shape;
    my $topicwordfh = open "topic-word.txt", :w;

    $topicwordfh.say: ($topic-size, $word-size).join(" ");
    for ^$topic-size -> $topic-i {
        $topicwordfh.say: gather for ^$word-size -> $word { take @topic-word-matrix[$topic-i;$word] }.join(" ");
    }
    $topicwordfh.close;

    my @vocabulary := $model.vocabulary;
    my $vocabfh = open "vocabulary.txt", :w;

    $vocabfh.say($_) for @vocabulary;
    $vocabfh.close;
}
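The matrix files written here use a simple layout: a "rows cols" header line, then one whitespace-separated row per line. A Python sketch of parsing that layout back (helper name is mine):

```python
# Parse a matrix in the format save-model writes: first line "<rows> <cols>",
# then one whitespace-separated row of numbers per line.
def parse_matrix(text):
    lines = text.splitlines()
    rows, cols = map(int, lines[0].split())
    matrix = [[float(x) for x in line.split()] for line in lines[1:rows + 1]]
    assert all(len(row) == cols for row in matrix)
    return matrix

sample = "2 3\n0.1 0.2 0.7\n0.5 0.4 0.1\n"
print(parse_matrix(sample))
# → [[0.1, 0.2, 0.7], [0.5, 0.4, 0.1]]
```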

## Exercise 3: Create Quotation Search Engine

In this section, we create a quotation search engine which uses the model created in the previous section.
More specifically, we create an LDA-based document model (Xing Wei and W. Bruce Croft 2006) and make a CLI tool that can search quotations. (Note that the words "token" and "word" are interchangeable in this section.)

The whole source code is:

use v6.c;

sub MAIN(Str :$query!) {
    my \doc-topic-iter = "document-topic.txt".IO.lines.iterator;
    my \topic-word-iter = "topic-word.txt".IO.lines.iterator;
    my ($document-size, $topic-size) = doc-topic-iter.pull-one.words;
    my ($, $word-size) = topic-word-iter.pull-one.words;
    my Num @document-topic[$document-size;$topic-size];
    my Num @topic-word[$topic-size;$word-size];
    for ^$document-size -> $doc-i {
        my \maybe-line := doc-topic-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @document-topic[$doc-i;$_] = @line[$_];
        }
    }

    for ^$topic-size -> $topic-i {
        my \maybe-line := topic-word-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @topic-word[$topic-i;$_] = @line[$_];
        }
    }

    my %vocabulary = "vocabulary.txt".IO.lines.pairs>>.antipair.hash;
    my @members = "members.txt".IO.lines;
    my @documents = "documents.txt".IO.lines;
    my @docbodies = @documents.map({ my ($, $, *@body) = .words; @body.join(" ") });
    my %doc-to-person = @documents.map({ my ($docid, $personid, $) = .words; %($docid => $personid) }).hash;
    my @query = $query.words.map(*.lc);

    my @sorted-list = gather for ^$document-size -> $doc-i {
        my Num $log-prob = gather for @query -> $token {
            my Num $log-ml-prob = Pml(@docbodies, $doc-i, $token);
            my Num $log-lda-prob = Plda($token, $topic-size, $doc-i, %vocabulary, @document-topic, @topic-word);
            take log-sum(log(0.2) + $log-ml-prob, log(0.8) + $log-lda-prob);
        }.sum;
        take %(doc-i => $doc-i, log-prob => $log-prob);
    }.sort({ $^b<log-prob> <=> $^a<log-prob> });

    for ^10 {
        my $docid = @sorted-list[$_]<doc-i>;
        sprintf("\"%s\" by %s %f", @docbodies[$docid], @members[%doc-to-person{$docid}], @sorted-list[$_]<log-prob>).say;
    }
}

sub Pml(@docbodies, $doc-i, $token --> Num) {
    my Int $num-tokens = @docbodies[$doc-i].words.grep({ /:i^ $token$/ }).elems;
    my Int $total-tokens = @docbodies[$doc-i].words.elems;
    return -100e0 if $total-tokens == 0 or $num-tokens == 0;
    log($num-tokens) - log($total-tokens);
}

sub Plda($token, $topic-size, $doc-i, %vocabulary is raw, @document-topic is raw, @topic-word is raw --> Num) {
    gather for ^$topic-size -> $topic {
        if %vocabulary{$token}:exists {
            take @document-topic[$doc-i;$topic] + @topic-word[$topic;%vocabulary{$token}];
        } else {
            take -100e0;
        }
    }.reduce(&log-sum);
}

sub log-sum(Num $log-a, Num $log-b --> Num) {
    if $log-a < $log-b {
        return $log-b + log(1 + exp($log-a - $log-b))
    } else {
        return $log-a + log(1 + exp($log-b - $log-a))
    }
}

At the beginning, we load the saved model and prepare @document-topic, @topic-word, %vocabulary, @documents, @docbodies, %doc-to-person and @members:

    my \doc-topic-iter = "document-topic.txt".IO.lines.iterator;
    my \topic-word-iter = "topic-word.txt".IO.lines.iterator;
    my ($document-size, $topic-size) = doc-topic-iter.pull-one.words;
    my ($, $word-size) = topic-word-iter.pull-one.words;
    my Num @document-topic[$document-size;$topic-size];
    my Num @topic-word[$topic-size;$word-size];
    for ^$document-size -> $doc-i {
        my \maybe-line := doc-topic-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @document-topic[$doc-i;$_] = @line[$_];
        }
    }

    for ^$topic-size -> $topic-i {
        my \maybe-line := topic-word-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @topic-word[$topic-i;$_] = @line[$_];
        }
    }

    my %vocabulary = "vocabulary.txt".IO.lines.pairs>>.antipair.hash;
    my @members = "members.txt".IO.lines;
    my @documents = "documents.txt".IO.lines;
    my @docbodies = @documents.map({ my ($, $, *@body) = .words; @body.join(" ") });
    my %doc-to-person = @documents.map({ my ($docid, $personid, $) = .words; %($docid => $personid) }).hash;
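The %vocabulary line builds a word-to-id map out of line numbers; the Python equivalent of that pairs>>.antipair trick is just enumerate (the word list here is a stand-in for vocabulary.txt):

```python
# vocabulary.txt holds one word per line; a word's line number is its id,
# mirroring `"vocabulary.txt".IO.lines.pairs>>.antipair.hash`.
lines = ["perl", "apple", "way"]  # stand-in for the file's lines
vocabulary = {word: idx for idx, word in enumerate(lines)}
print(vocabulary)
# → {'perl': 0, 'apple': 1, 'way': 2}
```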

Next, we set @query using the option :$query:

my @query = $query.words.map(*.lc);

After that, we compute the probability P(query|document) based on Eq. 9 of the aforementioned paper (note that we use logarithms to avoid underflow, and set the parameter mu to zero) and sort the documents by it.

    my @sorted-list = gather for ^$document-size ->$doc-i {
        my Num $log-prob = gather for @query -> $token {
            my Num $log-ml-prob = Pml(@docbodies, $doc-i, $token);
            my Num $log-lda-prob = Plda($token, $topic-size, $doc-i, %vocabulary, @document-topic, @topic-word);
            take log-sum(log(0.2) + $log-ml-prob, log(0.8) + $log-lda-prob);
        }.sum;
        take %(doc-i => $doc-i, log-prob => $log-prob);
    }.sort({ $^b<log-prob> <=> $^a<log-prob> });

Plda adds the logarithmic topic-given-document probability (i.e., ln P(topic|theta,document)) and word-given-topic probability (i.e., ln P(word|phi,topic)) for each topic, and sums over topics with .reduce(&log-sum):

sub Plda($token, $topic-size, $doc-i, %vocabulary is raw, @document-topic is raw, @topic-word is raw --> Num) {
    gather for ^$topic-size -> $topic {
        if %vocabulary{$token}:exists {
            take @document-topic[$doc-i;$topic] + @topic-word[$topic;%vocabulary{$token}];
        } else {
            take -100e0;
        }
    }.reduce(&log-sum);
}

and Pml (ml means maximum likelihood) counts occurrences of $token and normalizes by the total number of tokens in the document (note that this computation is also conducted in log space):

sub Pml(@docbodies, $doc-i,$token --> Num) {
my Int $num-tokens = @docbodies[$doc-i].words.grep({ /:i^ $token$/ }).elems;
my Int $total-tokens = @docbodies[$doc-i].words.elems;
return -100e0 if $total-tokens == 0 or$num-tokens == 0;
log($num-tokens) - log($total-tokens);
}
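Putting Pml, Plda and log-sum together: each query token contributes a log-space mixture of the two probabilities. A Python sketch under the same 0.2/0.8 weights and -100 floor (p_lda is passed in as an assumed precomputed function; names are mine):

```python
import math

def log_sum(log_a, log_b):
    # log(exp(a) + exp(b)) computed stably, like the log-sum helper above
    lo, hi = sorted((log_a, log_b))
    return hi + math.log1p(math.exp(lo - hi))

def p_ml(doc_words, token):
    # maximum-likelihood log probability of the token within the document
    n = sum(1 for w in doc_words if w.lower() == token)
    if not doc_words or n == 0:
        return -100.0
    return math.log(n) - math.log(len(doc_words))

def score(doc_words, query_tokens, p_lda):
    # p_lda(token) -> assumed precomputed log P(token | LDA model)
    return sum(log_sum(math.log(0.2) + p_ml(doc_words, t),
                       math.log(0.8) + p_lda(t))
               for t in query_tokens)

s = score(["perl", "is", "fun"], ["perl"], lambda t: -100.0)
print(round(s, 3))
# → -2.708  (log(0.2 * 1/3); the LDA term is negligible at the -100 floor)
```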

OK, then let’s execute!

query “perl”:

$ perl6 search-quotation.p6 --query="perl"
"Perl will always provide the null." by Larry Wall -3.301156
"Perl programming is an *empirical* science!" by Larry Wall -3.345189
"The whole intent of Perl 5's module system was to encourage the growth of Perl culture rather than the Perl core." by Larry Wall -3.490238
"I dunno, I dream in Perl sometimes..." by Larry Wall -3.491790
"At many levels, Perl is a 'diagonal' language." by Larry Wall -3.575779
"Almost nothing in Perl serves a single purpose." by Larry Wall -3.589218
"Perl has a long tradition of working around compilers." by Larry Wall -3.674111
"As for whether Perl 6 will replace Perl 5, yeah, probably, in about 40 years or so." by Larry Wall -3.684454
"Well, I think Perl should run faster than C." by Larry Wall -3.771155
"It's certainly easy to calculate the average attendance for Perl conferences." by Larry Wall -3.864075

query "apple":

$ perl6 search-quotation.p6 --query="apple"
"Steve Jobs is the"With phones moving to technologies such as Apple Pay, an unwillingness to assure security could create a Target-like exposure that wipes Apple out of the market." by Rob Enderle -3.841538
"*:From Joint Apple / HP press release dated 1 January 2004 available [http://www.apple.com/pr/library/2004/jan/08hp.html here]." by Carly Fiorina -3.904489
"Samsung did to Apple what Apple did to Microsoft, skewering its devoted users and reputation, only better. ... There is a way for Apple to fight back, but the company no longer has that skill, and apparently doesn't know where to get it, either." by Rob Enderle -3.940359
"[W]hen it came to the iWatch, also a name that Apple didn't own, Apple walked away from it and instead launched the Apple Watch. Certainly, no risk of litigation, but the product's sales are a fraction of what they otherwise might have been with the proper name and branding." by Rob Enderle -4.152145
"[W]hen Apple wanted the name "iPhone" and it was owned by Cisco, Steve Jobs just took it, and his legal team executed so he could keep it. It turned out that doing this was surprisingly inexpensive. And, as the Apple Watch showcased, the Apple Phone likely would not have sold anywhere near as well as the iPhone." by Rob Enderle -4.187223
"The cause of [Apple v. Qualcomm] appears to be an effort by Apple to pressure Qualcomm into providing a unique discount, largely because Apple has run into an innovation wall, is under increased competition from firms like Samsung, and has moved to a massive cost reduction strategy. (I've never known this to end well, as it causes suppliers to create unreliable components and outright fail.)" by Rob Enderle -4.318575
"Apple tends to aggressively work to not discover problems with products that are shipped and certainly not talk about them." by Rob Enderle -4.380863
"Apple no longer owns the tablet market, and will likely lose dominance this year or next. ... this level of sustained dominance doesn't appear to recur with the same vendor even if it launched the category." by Rob Enderle -4.397954
"Apple is becoming more and more like a typical tech firm â€” that is, long on technology and short on magic. ... Apple is drifting closer and closer to where it was back in the 1990s. It offers advancements that largely follow those made by others years earlier, product proliferation, a preference for more over simple elegance, and waning excitement." by Rob Enderle -4.448473
"[T]he litigation between Qualcomm and Apple/Intel ... is weird. What makes it weird is that Intel appears to think that by helping Apple drive down Qualcomm prices, it will gain an advantage, but since its only value is as a lower cost, lower performing, alternative to Qualcomm's modems, the result would be more aggressively priced better alternatives to Intel's offerings from Qualcomm/Broadcom, wiping Intel out of the market. On paper, this is a lose/lose for Intel and even for Apple. The lower prices would flow to Apple competitors as well, lowering the price of competing phones. So, Apple would not get a lasting benefit either." by Rob Enderle -4.469852 Ronald McDonald of Apple, he is the face." by Rob Enderle -3.822949
"With phones moving to technologies such as Apple Pay, an unwillingness to assure security could create a Target-like exposure that wipes Apple out of the market." by Rob Enderle -3.849055
"*:From Joint Apple / HP press release dated 1 January 2004 available [http://www.apple.com/pr/library/2004/jan/08hp.html here]." by Carly Fiorina -3.895163
"Samsung did to Apple what Apple did to Microsoft, skewering its devoted users and reputation, only better. ... There is a way for Apple to fight back, but the company no longer has that skill, and apparently doesn't know where to get it, either." by Rob Enderle -4.052616
"*** The previous line contains the naughty word '$&'.\n if /(ibm|apple|awk)/; # :-)" by Larry Wall -4.088445 "The cause of [Apple v. Qualcomm] appears to be an effort by Apple to pressure Qualcomm into providing a unique discount, largely because Apple has run into an innovation wall, is under increased competition from firms like Samsung, and has moved to a massive cost reduction strategy. (I've never known this to end well, as it causes suppliers to create unreliable components and outright fail.)" by Rob Enderle -4.169533 "[T]he litigation between Qualcomm and Apple/Intel ... is weird. What makes it weird is that Intel appears to think that by helping Apple drive down Qualcomm prices, it will gain an advantage, but since its only value is as a lower cost, lower performing, alternative to Qualcomm's modems, the result would be more aggressively priced better alternatives to Intel's offerings from Qualcomm/Broadcom, wiping Intel out of the market. On paper, this is a lose/lose for Intel and even for Apple. The lower prices would flow to Apple competitors as well, lowering the price of competing phones. So, Apple would not get a lasting benefit either." by Rob Enderle -4.197869 "Apple tends to aggressively work to not discover problems with products that are shipped and certainly not talk about them." by Rob Enderle -4.204618 "Today's tech companies aren't built to last, as Apple's recent earnings report shows all too well." by Rob Enderle -4.209901 "[W]hen it came to the iWatch, also a name that Apple didn't own, Apple walked away from it and instead launched the Apple Watch. Certainly, no risk of litigation, but the product's sales are a fraction of what they otherwise might have been with the proper name and branding." by Rob Enderle -4.238582 # Conclusions In this article, we explored Wikiquote and created a LDA model using Algoritm::LDA. After that we built an information retrieval application. Thanks for reading my article! See you next time! 
## Citations

• Blei, David M. "Probabilistic topic models." Communications of the ACM 55.4 (2012): 77-84.
• Wei, Xing, and W. Bruce Croft. "LDA-based document models for ad-hoc retrieval." Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2006.

## License

## Perl 6 Advent Calendar: Day 23 – Blin, it’s Christmas soon!

### Published by AlexDaniel on 2018-12-23T00:00:15

I’ve already mentioned Bisectable in one of the advent posts two years ago, but since then a lot has changed, so I think it’s time to give a brief history of the bisectable bot and its friends.

First of all, let’s define the problem that is being solved. Sometimes it happens that a commit introduces an unintended change in behavior (a bug). Usually we call that a regression, and in some cases the easiest way to figure out what went wrong and fix it is to first find which commit introduced the regression.

There are exactly 9000 commits between Rakudo 2015.12 and 2018.12, and even though it’s not over 9000, that’s still a lot.

Luckily, we don’t need to test all of the revisions. Assuming that the behavior wasn’t changing back and forth all the time, we can use binary search.

## git bisect and binary search

Basically, given any commit range, we take a commit in the “middle” of the range and test it. If it’s “bad”, or if it shows the “new” (now incorrect) behavior, then we can throw away the second half of our range (because we know that the change must have happened before that commit or exactly on that commit). Similarly, we throw away the other half if it is “good” (or “old”). So instead of testing all 9000 commits we can just check about log n revisions (≈13).

Git comes with a git bisect command which implements the binary search logic for you. All you have to do is give it some starting points and then, for every commit it jumps to, tell whether it is good or bad. If you do that enough times, it’ll tell you which commit is at fault.
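The binary search just described, including git-bisect-style skipping of untestable revisions, can be sketched in Python (a toy model, not git's actual implementation; `test` returns "good", "bad", or "skip"):

```python
# Find the first "bad" commit in a range, assuming commits[0] is good and
# commits[-1] is bad. Untestable revisions report "skip", and we probe the
# nearest testable neighbor instead, like `git bisect skip`.
def bisect(commits, test):
    lo, hi = 0, len(commits) - 1          # invariant: commits[lo] good, commits[hi] bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # walk outward from the midpoint past any skipped revisions
        for candidate in sorted(range(lo + 1, hi), key=lambda i: abs(i - mid)):
            result = test(commits[candidate])
            if result != "skip":
                break
        else:
            return None                    # everything in between is untestable
        if result == "good":
            lo = candidate
        else:
            hi = candidate
    return commits[hi]                     # first known-bad commit

commits = list(range(9000))                # stand-ins for revisions
print(bisect(commits, lambda c: "skip" if c == 4499 else ("good" if c < 6000 else "bad")))
# → 6000
```

Note how a single skipped revision near the midpoint only nudges the probe sideways instead of diverting the whole search.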
That’s all good, but there are two problems with it.

## Problem ❶: Skipping

Let’s imagine a situation where 2 + 2 used to return 4 (correct!), but now returns 42 (… also right, but not quite). So you kick off the bisection process, git jumps between revisions, you test them. If it’s 4 then it’s good (or old), if it’s 42 then it is bad (or new). But then you stumble upon this behavior:

> 2 + 2
Merry Christmas!

… Now what? Clearly that specific revision is somewhat special. We can’t tell if our bug is present or not, we simply can’t know. Yes, it doesn’t print 4, but we are looking for a very specific bug, so it doesn’t classify as “new” behavior either. Of course, we can toss a coin and mark it randomly as old or new, and hope for a Christmas miracle… but that has a 50% probability (if we see only one of these) to divert the binary search in the wrong direction.

For these cases git provides a special skip command. If you are testing manually, then it is somewhat straightforward to handle these revisions (as long as you remember that you should skip them). However, because of problem ❷, a lot of people are tempted to use git bisect run, which automates the process with a script. It is possible to skip revisions using a script too (use exit code 125), but it is not that obvious how to figure out which revisions should be skipped.

## Problem ❷: Build times

Let’s take the optimistic figure of 13 to estimate the number of revisions that we are going to test. Remember that it doesn’t include commits that we will have to skip, and possibly other extra builds that we might want to test. The time it takes to build rakudo varies depending on the hardware, but let’s optimistically say that it takes us 2 minutes to build rakudo on a particular commit and test it.

13 × 2 = 26 (minutes)

That’s not very convenient, right? And if something goes wrong during the process… you start over, and then you wait.
## Bisectable

In 2016, after seeing the pain of those who have to run git bisect manually (actually, mostly myself), I wondered:

<AlexDaniel> has anybody thought about building rakudo for every single commit, so that you can quickly run git bisect?

The cost-benefit analysis of the idea was promptly questioned:

<perlpilot> AlexDaniel: do you believe that bisects will be common in the future?

To which I provided a very detailed justification:

<AlexDaniel> perlpilot: yes

Three days later, the bot joined the channel. The reactions were quite interesting to see:

<moritz> woah
<tadzik> wow
<RabidGravy> OoOOOoooh
<llfourn> Cooooool

Little did we know back then. Even I had no idea how useful it would turn out. Fast forward 2 years:

<lizmat> with regards to size of commits: I try to keep them as small and contained as possible, to allow for easier bisecting
<lizmat> in that sense, bisectable6 has changed the way I code
<lizmat> also: bisectable6 has made me worry less about changes I commit
<lizmat> because it usually limits the places to look for fixing an issue so much, that they can be fixed within minutes rather than hours
<lizmat> or at least show the cause very quickly (so the short-time fix may mean a revert)
<AlexDaniel> \o/

But it wasn’t always perfect. About one hour after the introduction of the bot, it was used for its purpose:

<moritz> bisect: try { NaN.Rat == NaN; exit 0 }; exit 1
<bisectable> moritz: (2016-05-02) https://github.com/rakudo/rakudo/commit/949a7c7

However, because of an off-by-one, it returned the wrong commit. The actual commit was e2f1fa7, and 949a7c7 is its parent. Honestly, the bot was very bad back then. For example, it fully relied on the exit code, so you couldn’t just throw 2 + 2 into it and expect it to check the output. Eventually, different modes were implemented, and nowadays the bot first checks the behavior on the starting points (e.g. 2015.12 and HEAD), and determines the best strategy to perform the bisection.
For example, if the signal is different (e.g. a SEGV), then it bisects based on the signal. If the signal is the same but the exit code is different, then it uses the exit code. If nothing else can be used, it bisects on the output. Keep in mind that Bisectable also checks for you whether the perl6 binary can be built, which means that in most cases you don’t need to add your own logic for skipping. Not only did this bring the bisection time down from tens of minutes to a few seconds, it also gives results that are more reliable and correct.

## Storage

Some time later, the commit range was expanded to 2014.01..HEAD, meaning all commits starting from the first ever Rakudo-on-Moar release. Currently there are over 17000 builds. That may sound like a lot, but with every rakudo installation taking just ≈28 MB, it’s not too much. Having a few TB of storage should keep you going for years to come.

That being said, I don’t have that luxury on my server. It has a RAID of 120 GB SSDs, so the whole thing not only has to fit into that little amount of space, it also has to leave enough room for the rest of the system. There was a lot of experimentation (one, two) involved in figuring out the best strategy to save space, but long story short, we can go as low as about half a megabyte per build! Of course, it is always a tradeoff between compression ratio and decompression speed, but with modern compression algorithms (zstd, lrzip) everything is relatively easy.

## More bots, more better

Shortly after Bisectable was released, people saw an opportunity for other tools. Want to run some code on a specific commit? Sure, here’s a bot for that. Want to download a prebuilt rakudo archive instead of wasting your own CPU time? Yes, there’s another bot. Want to graph some info about rakudo? Of course there’s a bot for that! And so it continued, until we reached a total of 17 bots! Some argue that these bots should stop multiplying like that, and perhaps they are right.
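The half-a-megabyte figure comes from compressing each build individually. As a rough sketch of that idea (not the actual Whateverable setup; the directory name and payload are invented, and plain gzip stands in for the zstd/lrzip used in practice so the sketch runs anywhere):

```shell
# Sketch: archive one (fake) per-commit build and compare sizes.
cd "$(mktemp -d)"
build=rakudo-0123abc                      # invented commit-named directory
mkdir -p "$build/bin"
# A highly repetitive fake payload, which compresses very well:
yes 'fake rakudo build data' | head -n 20000 > "$build/bin/payload"
tar -czf "$build.tar.gz" "$build"
original_kb=$(du -sk "$build" | cut -f1)
compressed_kb=$(du -k "$build.tar.gz" | cut -f1)
echo "original: ${original_kb}K, compressed: ${compressed_kb}K"
```

Real build trees compress far less dramatically than this toy payload, which is why the choice of compressor and settings (ratio versus decompression speed) matters so much at this scale.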
But I guess the point is that it is now extremely easy to build more developer tools on top of Whateverable, which is of course great.

## OK, now what?

So Bisectable can bisect across thousands of commits in no time. It consumes very little storage space, and it doesn’t require full understanding of the bisection process from the user. Now that bisection is free and easy, can we do more?

## Yes, Blin!

You may have heard about Toaster. Toaster is a tool that attempts to install every module in the ecosystem on two or more revisions. For example, let’s say that the last release was 2018.12 and the release manager is about to cut a rakudo release from master HEAD. You can then run Toaster on 2018.12 and master, and it will show which modules used to install cleanly but no longer do. That tells us that something is likely wrong in Rakudo, but not what exactly. Given that this post was mostly about Bisectable, you can probably guess where this is going.

## Project Blin – Toasting Reinvented

Blin is a quality assurance tool for Rakudo releases. It is used to find regressions in rakudo, but unlike Toaster, it not only tells you which modules are no longer installable, it also bisects rakudo to find out which commit caused the issue. Of course, it is built around Whateverable, so that extra functionality doesn’t cost much (and doesn’t even require a lot of code). As a bonus, it generates nice graphs to visualize how an issue propagates from module dependencies (though that is not very common).

One important feature of Blin is that it tries to install every module just once. So if module B depends on module A, A will be tested and installed once, and then reused for the testing of B. Because this process is parallelized, you may wonder how it was implemented. Basically, it uses the underrated react/whenever feature:

```
# slightly simplified
react {
    for @modules -> $module {
        whenever Promise.allof($module.depends.keys».done) {
            start { process-module $module, … }
        }
    }
}
```
For every module (we have more than 1200 now) it creates its own whenever block, which fires when that module’s dependencies are satisfied. In my opinion, that’s the whole implementation of the main logic in Blin; everything else is just glue to get Whateverable and Zef working together to achieve what we need, plus some output generation.

In some way, Blin didn’t change much in the way we do quality assurance for Rakudo. Toaster was already able to give us some basic info (albeit slower) so that we could start the investigation, and in the past I was known for shoving weird things (e.g. full modules with dependencies) into bisectable. It’s just that now it is much easier, and when The Day comes, I won’t be punished for robot abuse.

## Future

Whateverable and Blin together have 243 open issues. Both projects work great and are very useful, but as we say, they are Less Than Awesome. Most issues are relatively easy and fun to work with, but they require time. If there’s anything you can help with or if you want to maintain these projects, please feel free to do so. And if you want to build your own tools based on Whateverable (which we probably need a lot!), see this hello world gist.


## Perl 6 Advent Calendar: Day 22 – Testing Cro HTTP APIs

A good amount of my work time this year has been spent on building a couple of Perl 6 applications. After a decade of contributing to Perl 6 compiler and runtime development, it feels great to finally be using it to deliver production solutions solving real-world problems. I’m still not sure whether writing code in an IDE I founded, using a HTTP library I designed, compiled by a compiler I implemented large parts of, and running on a VM that I play architect for, makes me one of the world’s worst cases of “Not Invented Here”, or just really Full Stack.

Whatever I’m working on, I highly value automated testing. Each passing test is something I know works – and something that I won’t break as I evolve the software in question. Even with automated tests, bugs happen, but adding a test to cover the bug at least means I’ll make different bugs in the future, which is perhaps a bit more forgivable.

Most of the code, and complexity, in the system I’m currently working on is in its domain objects. Those are reached through a HTTP API, implemented using Cro – and like the rest of the system, this HTTP API has automated tests. They use one old module of mine – Test::Mock – along with a new module released this year, Cro::HTTP::Test. In today’s advent post, I’ll discuss how I’ve been using them together, with results that I find quite pleasing.

## A sample problem

It’s the advent calendar, so of course I need a sufficiently festive example problem. For me, one of the highlights of Christmas time in Central Europe is the Christmas markets, many set on beautiful historic city squares. And what, aside from sausage and mulled wine, do we need on that square? A tall, handsome Christmas tree, of course! But how to find the best tree? Well, we get the internet to help, by building a system where they can submit suggestions of trees they’ve seen that might be suitable. What could possibly go wrong?

One can PUT to a route /trees/{latitude}/{longitude} to submit a candidate tree at that location. The expected payload is a JSON blob with a tree height, and a text description of 10-200 characters explaining why the tree is so awesome. If a tree in the same location has already been submitted, a 409 Conflict response should be returned. If the tree is accepted, then a simple 200 OK response will be produced, with a JSON body describing the tree.

A GET of the same URI will return a description of the tree in question, while a GET to /trees will return the submitted trees, tallest first.

## Testability

Back in high school, science classes were certainly among my favorites. Now and then, we got to do experiments. Of course, each experiment needed writing up: the planning beforehand, the results, and an analysis of them. One of the most important parts of the planning was about how to ensure a “fair test”: how would we try to control all of the things we weren’t trying to test, so that we could trust our observations and draw conclusions from them?

Testing in software involves much the same thought process: how do we exercise the component(s) we’re interested in, while controlling the context they operate in? Sometimes, we get lucky, and we’re testing pure logic: it doesn’t depend on anything other than the things we give it to work with. In fact, we can create our own luck in this regard, spotting parts of our system that can be pure functions or immutable objects. To take examples from the current system I’m working on:

• We have an object model built up from a bunch of specification files. The process of building it is pretty involved, with a bunch of sanity checks, a few graph algorithms, and so forth. But the result is a bunch of immutable objects. Once constructed, they never change. Testing is easy: throw a bunch of test input in, and check that it builds the expected objects.
• We have a small language with an evaluator. The data used by expressions in the language is passed in as an argument to the evaluator, and then we can check the result is what is expected. Thus, the evaluator is a pure function.

So, the first thing to do for testability is to find the bits of the system that can be like this and build them that way. Alas, not all things are so simple. HTTP APIs are often a gateway to mutable state, database operations, and so forth. Further, a good HTTP API will map error conditions from the domain level into appropriate HTTP status codes. We’d like to be able to create such situations in our tests, so as to cover them. This is where a tool like Test::Mock comes in – but to use it, we need to factor our Cro service in a way that is test-friendly.

## Stubbing a service

For those new to Cro, let’s take a look at the bare minimum we can write to get a HTTP service up and running, serving some fake data about trees.

```
use Cro::HTTP::Router;
use Cro::HTTP::Server;

my $application = route {
    get -> 'trees' {
        content 'application/json', [
            {
                longitude => 50.4311548,
                latitude => 14.586079,
                height => 4.2,
                description => 'Nice color, very bushy'
            },
            {
                longitude => 50.5466504,
                latitude => 14.8438714,
                height => 7.8,
                description => 'Really tall and wide'
            },
        ]
    }
}

my $server = Cro::HTTP::Server.new(:port(10000), :$application);
$server.start;
react whenever signal(SIGINT) {
    $server.stop;
    exit;
}
```

This isn’t a great setup for being able to test our routes, however. Better would be to put the routes into a subroutine in a module, lib/BestTree.pm6:

```
unit module BestTree;
use Cro::HTTP::Router;

sub routes() is export {
    route {
        get -> 'trees' {
            content 'application/json', [
                {
                    longitude => 50.4311548,
                    latitude => 14.586079,
                    height => 4.2,
                    description => 'Nice color, very bushy'
                },
                {
                    longitude => 50.5466504,
                    latitude => 14.8438714,
                    height => 7.8,
                    description => 'Really tall and wide'
                },
            ]
        }
    }
}
```

And use it from the script:

```
use BestTree;
use Cro::HTTP::Server;

my $application = routes();
my $server = Cro::HTTP::Server.new(:port(10000), :$application);
$server.start;
react whenever signal(SIGINT) {
    $server.stop;
    exit;
}
```

Now, if we had something that could be used to test that route blocks do the right thing, we could use this module, and get on with our testing.

## Stores, models, etc.

There’s another problem, however. Our Christmas tree service will be stashing the tree information away in some database, as well as enforcing the various rules. Where should this logic go?

There’s many ways we might choose to arrange this code, but the key thing is that this logic does not belong in our Cro route handlers. Their job is to map between the domain objects and the world of HTTP, for example turning domain exceptions into appropriate HTTP error responses. That mapping is what we’ll want to test.

So, before we continue, let’s define how some of those things look. We’ll have a BestTree::Tree class that represents a tree:

```
class BestTree::Tree {
    has Rat $.latitude;
    has Rat $.longitude;
    has Rat $.height;
    has Str $.description;
}
```

And we’ll work with a BestTree::Store object. We won’t actually implement this as part of this post; it will be what we fake in our tests.

```
class BestTree::Store {
    method all-trees() { ... }
    method suggest-tree(BestTree::Tree $tree --> Nil) { ... }
    method find-tree(Rat $latitude, Rat $longitude --> BestTree::Tree) { ... }
}
```

But how can we arrange things so we can take control of the store that is used by the routes, for testing purposes? One easy way is to make it a parameter to our routes subroutine, meaning it will be available in the route block:

```
sub routes(BestTree::Store $store) is export {
    ...
}
```

This is a functional factoring. Some folks may prefer to use some kind of OO-based Dependency Injection, using some kind of container. That can work fine with Cro too: just have a method that returns the route block. (If building something non-tiny with Cro, check out the documentation on structuring services for some further advice on this front.)

## Getting a list of trees

Now we’re ready to start writing tests! Let’s stub the test file:

```
use BestTree;
use BestTree::Store;
use Cro::HTTP::Test;
use Test::Mock;
use Test;

# Tests will go here

done-testing;
```

We use BestTree, which contains the routes we want to test, along with:

• Cro::HTTP::Test, which we will use to easily write tests for our routes
• Test::Mock, which we’ll use to fake the store
• Test, which we don’t strictly need, but having access to subtest will
let us produce more organized test output

Next, we’ll make a couple of tree objects to use in our tests:

```
my $fake-tree-a = BestTree::Tree.new:
    latitude => 50.4311548,
    longitude => 14.586079,
    height => 4.2,
    description => 'Nice color, very bushy';
my $fake-tree-b = BestTree::Tree.new:
    latitude => 50.5466504,
    longitude => 14.8438714,
    height => 7.8,
    description => 'Really tall and wide';
```

And here comes the first test:

```
subtest 'Get all trees' => {
    my $fake-store = mocked BestTree::Store,
        returning => {
            all-trees => [$fake-tree-a, $fake-tree-b]
        };
    test-service routes($fake-store), {
        test get('/trees'),
            status => 200,
            json => [
                {
                    latitude => 50.4311548,
                    longitude => 14.586079,
                    height => 4.2,
                    description => 'Nice color, very bushy'
                },
                {
                    latitude => 50.5466504,
                    longitude => 14.8438714,
                    height => 7.8,
                    description => 'Really tall and wide'
                }
            ];
        check-mock $fake-store,
            *.called('all-trees', times => 1, with => \());
    }
}
```

First, we make a fake of BestTree::Store that, whenever all-trees is called, will return the fake data we specify. We then use test-service, passing in the route block created with the fake store. All test calls within the block that follows will be executed against that route block.

Notice that we don’t need to worry about running a HTTP server here to host the routes we want to test. In fact, due to the pipeline architecture of Cro, it’s easily possible for us to take the Cro HTTP client, wire its TCP message output to put the data it would send into a Perl 6 Channel, and then have that data pushed into the server pipeline’s TCP message input, and vice versa. This means that we test things all the way down to the bytes that are sent and received, but without actually having to hit even the local network stack. (Aside: you can also use Cro::HTTP::Test with a URI, which means that if you really wanted to spin up a test server, or even wanted to write tests against some other service running in a different process, you could do it.)

The test routine specifies a test case. Its first argument describes the request that we wish to perform – in this case, a get to /trees. The named arguments then specify how the response should look. The status check ensures we get the expected HTTP status code back. The json check is really two in one:

• It checks that the HTTP content-type is a JSON one
• It checks that the body deserializes to the supplied JSON (if you don’t want to test every single piece of it, pass a block there, which should evaluate to True)

If that’s all we did, and we ran our tests, we’d find they mysteriously pass, even though we didn’t yet edit our route block’s get handler to actually use the store! Why? Because it turns out I was lazy and used the data from my earlier little server example as my test data here.
No worries, though: to make the test stronger, we can add a call to check-mock, and then assert that our fake store really did have the all-trees method called once, and with no arguments passed. That just leaves us to make the test pass, by implementing the handler properly:

```
get -> 'trees' {
    content 'application/json', [
        $store.all-trees.map: -> $tree {
            {
                latitude => $tree.latitude,
                longitude => $tree.longitude,
                height => $tree.height,
                description => $tree.description
            }
        }
    ]
}
```

## Getting a tree

Time for the next test: getting a tree. There are two cases to consider here: the one where the tree is found, and the one where it is not. Here’s a test for the case where a tree is found:

```
subtest 'Get a tree that exists' => {
    my $fake-store = mocked BestTree::Store,
        returning => {
            find-tree => $fake-tree-b
        };
    test-service routes($fake-store), {
        test get('/trees/50.5466504/14.8438714'),
            status => 200,
            json => {
                latitude => 50.5466504,
                longitude => 14.8438714,
                height => 7.8,
                description => 'Really tall and wide'
            };
        check-mock $fake-store,
            *.called('find-tree', times => 1, with => \(50.5466504, 14.8438714));
    }
}
```

Running this now fails. In fact, the status code check fails first: we didn’t implement the route yet, so we get a 404 back, not the expected 200. So, here’s an implementation to make it pass:

```
get -> 'trees', Rat() $latitude, Rat() $longitude {
    given $store.find-tree($latitude, $longitude) -> $tree {
        content 'application/json', {
            latitude => $tree.latitude,
            longitude => $tree.longitude,
            height => $tree.height,
            description => $tree.description
        }
    }
}
```

Part of this looks somewhat familiar from the other route, no? So, with two passing tests, let’s go forth and refactor:

```
get -> 'trees' {
    content 'application/json', [$store.all-trees.map(&tree-for-json)];
}

get -> 'trees', Rat() $latitude, Rat() $longitude {
    given $store.find-tree($latitude, $longitude) -> $tree {
        content 'application/json', tree-for-json($tree);
    }
}

sub tree-for-json(BestTree::Tree $tree --> Hash) {
    return {
        latitude => $tree.latitude,
        longitude => $tree.longitude,
        height => $tree.height,
        description => $tree.description
    }
}
```

And the tests pass, so we know our refactor is good. But wait: what if there is no tree at the given location? In that case, the store will return Nil. We’d like to map that into a 404. Here’s another test:

```
subtest 'Get a tree that does not exist' => {
    my $fake-store = mocked BestTree::Store,
        returning => {
            find-tree => Nil
        };
    test-service routes($fake-store), {
        test get('/trees/50.5466504/14.8438714'),
            status => 404;
        check-mock $fake-store,
            *.called('find-tree', times => 1, with => \(50.5466504, 14.8438714));
    }
}
```

This fails, in fact, with a 500 error, since we didn’t consider that case in our route block. Happily, this one is easy to deal with: turn the given into a with, which checks that we got a defined object, then add an else and produce the 404 Not Found response.

```
get -> 'trees', Rat() $latitude, Rat() $longitude {
    with $store.find-tree($latitude, $longitude) -> $tree {
        content 'application/json', tree-for-json($tree);
    }
    else {
        not-found;
    }
}
```

## Submitting a tree

Last but not least, let’s test the route for suggesting a new tree. Here’s the successful case:

```
subtest 'Suggest a tree successfully' => {
    my $fake-store = mocked BestTree::Store;
    test-service routes($fake-store), {
        my %body = description => 'Awesome tree', height => 4.25;
        test put('/trees/50.5466504/14.8438714', json => %body),
            status => 200,
            json => {
                latitude => 50.5466504,
                longitude => 14.8438714,
                height => 4.25,
                description => 'Awesome tree'
            };
        check-mock $fake-store,
            *.called('suggest-tree', times => 1, with => :(
                BestTree::Tree $tree where {
                    .latitude == 50.5466504 &&
                    .longitude == 14.8438714 &&
                    .height == 4.25 &&
                    .description eq 'Awesome tree'
                }
            ));
    }
}
```

This is mostly familiar, except the check-mock call looks a little different this time. Test::Mock lets us test the arguments in two different ways: with a Capture (as we’ve done so far) or with a Signature. The Capture case is great for all of the simple cases, where we’re just dealing with boring values. However, once we get into reference types, or if we don’t actually care about exact values and just want to assert on the things we do care about, a Signature gives us the flexibility to do that. Here, we use a where clause to check that the tree object the route handler constructed contains the expected data.

Here’s the route handler that does just that:

```
put -> 'trees', Rat() $latitude, Rat() $longitude {
    request-body -> (Rat(Real) :$height!, Str :$description!) {
        my $tree = BestTree::Tree.new: :$latitude, :$longitude, :$height, :$description;
        $store.suggest-tree($tree);
        content 'application/json', tree-for-json($tree);
    }
}
```

Notice how Cro lets us use Perl 6 signatures to destructure the request body. In one line, we’ve said:

• The request body must have height and description
• That we want the height to be a Real number
• That we want the description to be a string

Should any of those fail, Cro will automatically produce a 400 Bad Request response for us. In fact, we can write tests to cover that – along with a new test to make sure a conflict will result in a 409.

```
subtest 'Problems suggesting a tree' => {
    my $fake-store = mocked BestTree::Store,
        computing => {
            suggest-tree => {
                die X::BestTree::Store::AlreadySuggested.new;
            }
        };
    test-service routes($fake-store), {
        # Missing height and description.
        test put('/trees/50.5466504/14.8438714', json => {}),
            status => 400;

        # Description present, but no height.
        my %bad-body = description => 'ok';
        test put('/trees/50.5466504/14.8438714', json => %bad-body),
            status => 400;

        # Height present, but no description.
        %bad-body = height => 4.25;
        test put('/trees/50.5466504/14.8438714', json => %bad-body),
            status => 400;

        # Conflict.
        my %body = description => 'Awesome tree', height => 4.25;
        test put('/trees/50.5466504/14.8438714', json => %body),
            status => 409;
    }
}
```

The main new thing here is that we’re using computing instead of returning with mocked. In this case, we pass a block, and it will be executed. (The block does not get the method arguments, however. If we want to get those, there is a third option, overriding, where we get to take the arguments and write a fake method body.)

And how to handle this? By making our route handler catch and map the typed exception:

```
put -> 'trees', Rat() $latitude, Rat() $longitude {
    request-body -> (Rat(Real) :$height!, Str :$description!) {
        my $tree = BestTree::Tree.new: :$latitude, :$longitude, :$height, :$description;
        $store.suggest-tree($tree);
        content 'application/json', tree-for-json($tree);
        CATCH {
            when X::BestTree::Store::AlreadySuggested {
                conflict;
            }
        }
    }
}
```

## Closing thoughts

With Cro::HTTP::Test, there’s now a nice way to write HTTP tests in Perl 6. Put together with a testable design, and perhaps a module like Test::Mock, we can also isolate our Cro route handlers from everything else, easing their testing.

The logic in our route handlers here is relatively straightforward; small sample problems usually are. Even here, however, I find there’s value in the journey, rather than only in the destination. The act of writing tests for a HTTP API puts me in the frame of mind of whoever will be calling the API, which can be a useful perspective to have. Experience also tells that tests “too simple to fail” do end up catching mistakes: the kinds of mistakes I might assume I’m too smart to make. Discipline goes a long way. On which note, I’ll now be disciplined about taking a break from the keyboard now and then, and go enjoy a Christmas market. -Ofun!

## Perl 6 Advent Calendar: Day 21 – A Red Secret Santa

The year is ending and we have a lot to celebrate! What is a better way to celebrate the end of the year than with our family and friends? To help achieve that, here at my home, we decided to run a Secret Santa Game! So, my goal is to write a Secret Santa Program! That’s something where I can use this wonderful project called Red.

Red is an ORM (object-relational mapper) for Perl 6, still under development and not yet published as a module. But it’s growing, and it is close to a release.

So let’s create our first table: a table that will store the people participating in our Secret Santa.

Red maps relational databases to OOP. Each table is mapped to a Red class (model), each of whose objects represents a row.

The way we create a model is by using the model special word. A model is just a normal class that extends Red::Model and has a MetamodelX::Red::Model object as its metaclass. Red does not add any methods you didn’t explicitly create to its models, so to interact with the database you should use the metaclass.

But let’s continue.

The code creates a new model called Person. The name of the table this model represents will be the same name as the model: “Person”. If necessary, you can change the name of the table with the is table<...> trait (for example: model Person is table<another_name> {...}).

This model has 3 attributes:

• $.name has an is column trait;
• $.email has an is column{ :nullable };
• and $.id has an is serial, which means the same as is column{ :id, :auto-increment }.

Red makes columns NOT NULL by default, so if you want to create a nullable column you should use is column{ :nullable }. All attributes on Person are therefore columns. The is serial (specifically its :id part) means it’s the table’s primary key.

After that, the code sets a dynamic variable ($*RED-DB) to the result of database "SQLite". The database sub receives the driver’s name and the parameters that driver expects.

In this case it uses the SQLite driver, and if you don’t pass any other argument, it will use an in-memory database. If you want to use a file named secret-santa.db as the database file, you can do database "SQLite", :database<secret-santa.db>. Or, if you want to use a local Postgres, just use database "Pg". Red uses the variable $*RED-DB to know which database to use.

OK, now let’s create the table! As I said before, Red does not add any methods you didn’t explicitly ask for, so to create the table a metaclass method is used: Person.^create-table is how you create the table, and it runs the corresponding CREATE TABLE statement.

Now we should insert some data. We do that with another meta method, .^create, which expects the same arguments .new expects: each named argument will set an attribute with the same name. .^create will create a new Person object, save it in the database (with .^save: :insert), and return it.

Every model has a ResultSeq: a sequence that represents every row in the table. We can get a model’s ResultSeq with .^all (or .^rs). ResultSeq has some methods to help you get information from the table. For example, .grep will filter the rows (as it does on a normal Seq), but it doesn’t do so in memory: it returns a new ResultSeq with that filter set, and when its iterator is retrieved, it runs a single SQL query using everything set on the ResultSeq. In our example, Person.^all.grep(*.email.defined).map: *.name runs such a query and prints:

```
Fernando
Aline
```

Okay, we have code that can save who has entered our Secret Santa game. But everyone wants different gifts. How can we know each person’s wishes?
Let’s modify the code to make it save the wishlist of everyone participating in the Secret Santa. That prints:

```
Fernando
	Comma => https://commaide.com
	perl6 books => https://perl6book.com
	mac book pro => https://www.apple.com/shop/buy-mac/macbook-pro/15-inch-space-gray-2.6ghz-6-core-512gb#
Aline
	a new closet => https://i.pinimg.com/474x/02/05/93/020593b34c205792a6a7fd7191333fc6--wardrobe-behind-bed-false-wall-wardrobe.jpg
Fernanda
	mimikyu plush => https://www.pokemoncenter.com/mimikyu-poké-plush-%28standard-size%29---10-701-02831
	camelia plush => https://farm9.static.flickr.com/8432/28947786492_80056225f3_b.jpg
Sophia
	baby alive => https://www.target.com/p/baby-alive-face-paint-fairy-brunette/-/A-51304817
```

Now we have a new model, Wishlist, that refers to a table named wishlist. It has $!id as its id; $!name and $!link are columns; and there are some new things. has UInt $!wisher-id is referencing{ Person.id }; is the same as has UInt $!wisher-id is column{ :references{ Person.id } };, which means it’s a column that is a foreign key referencing the id column of Person.

It also has has Person $.wisher is relationship{ .wisher-id };. That’s not a column; it’s a “virtual” field. The $ sigil means there is only one wisher per wish. is relationship expects a Callable that will receive a model; if the attribute is Scalar, it receives the current model (Wishlist, in this case) as its only argument. The return value of the relationship’s Callable must be a column that references some other column.

Let’s see how this table is created. As you can see from the generated SQL, no wisher column is created.

The Person model has changed too! It now has a @.wishes relationship (has Wishlist @.wishes is relationship{ .wisher-id }). It uses the @ sigil, so each Person can have more than one wish. The Callable passed will receive the type of the Positional attribute (Wishlist in this case) and must return a column that references some other column.

The table created is the same as before.

We created a new Person as we did before: my \fernando = Person.^create: :name<Fernando>, :email<fco@aco.com>; and now we can use the wishes relationship to create a new wish: fernando.wishes.create: :name<Comma>, :link<https://commaide.com>. That creates a new wish for Fernando by running an SQL INSERT.

Did you see? wisher_id is 1, and 1 is Fernando’s id. Because the wish was created through Fernando’s .wishes, it already knows that it belongs to Fernando.

And then we define wishes for every person we create.

Then we loop over every Person in the database (Person.^all) and print their name, and then loop over that person’s wishes, printing each wish’s name and link.

Okay, we can save who is participating and get what they want… but what about the draw? Who should give a gift to whom? To do that, we change our program again:

Now Person has two new attributes ($!pair-id and $.pair) and a new method (draw). $!pair-id is a foreign key that references the id field on the same table (Person), so we have to use an alias (.^alias). $.pair is the relationship that uses that foreign key.

The new method (draw) is where the magic happens. It uses the method .pick: *, which on a normal Positional would shuffle the list, and it does the same here, via the generated query.

Once we have the shuffled list, we use .rotor to take the items two at a time, stepping back one each time. That way each person is paired with (gives to) the next person in the shuffled list, and the last person in the list gives to the first.
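The shuffle-and-pair trick (.pick: * followed by the overlapping .rotor step) can be sketched outside of Red too. Here is a rough shell rendition, just the pairing logic and not the database part (names taken from the post; `shuf` from GNU coreutils is assumed to be available):

```shell
# Shuffle the participants, then pair each with the next one,
# wrapping the last person around to the first.
names='Fernando
Aline
Fernanda
Sophia'
shuffled=$(printf '%s\n' "$names" | shuf)
first=$(printf '%s\n' "$shuffled" | head -n 1)
pairs=$(printf '%s\n%s\n' "$shuffled" "$first" |
    awk 'NR > 1 { print prev " -> " $0 } { prev = $0 }')
echo "$pairs"
```

Every run prints four “giver -> receiver” lines; each person gives exactly once and receives exactly once, mirroring the overlapping-pairs behavior described above.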

And this is the output of our final code:

```
Fernando -> Sophia	Wishlist: baby alive
Aline -> Fernanda	Wishlist: mimikyu plush, camelia plush
Fernanda -> Fernando	Wishlist: COMMA, perl6 books, mac book pro
Sophia -> Aline	Wishlist: a new closet
```

As a bonus, let’s check out the track Red is going to follow. There is already working code, along with the SQL it runs, that prints:

```
Fernando => fco@aco.com
Aline => aja@aco.com
Fernanda
Sophia
```

## Perlgeek.de: Perl 6 Coding Contest 2019: Seeking Task Makers

I want to revive Carl Mäsak's Coding Contest as a crowd-sourced contest.

The contest will be in four phases:

• development of tasks
• public contest, where you can solve the tasks
• public commenting on solutions
• grading and awarding of prizes

For the first phase, development of tasks, I am looking for volunteers who come up with coding tasks collaboratively. Sadly, these volunteers, including myself, will be excluded from participating in the second phase.

I am looking for tasks that ...

• are well-worded
• have a (private) implementation that shows it is solvable, preferably in under 500 lines of code
• have a few public tests to help the participants, and more tests that help the graders and commenters later on

This is non-trivial, so I'd like to have others to discuss things with, and to come up with some more tasks.

If you want to help with task creation, please send an email to moritz.lenz@gmail.com, stating your intentions to help, and your freenode IRC handle (optional).

There are other ways to help too:

• You can pledge a prize (some ideas: Comma IDE subscription, books, Camelia plushies, Amazon vouchers)
• You can help design a small website for the contest
• You can help iron out the exact rules for the contest
• ... or any other way that I didn't think of :-)

In these cases you can use the same email address to contact me, or use IRC (moritz on freenode) or twitter.

## my Timotimo \this: Rakudo Perl 6 performance analysis tooling: Progress Report

I have started properly working on the grant later than I had hoped, and have not been able to invest as much time per week as I had initially planned. However, recently more and more parts of the tool have come together to form a useful product.

Recently, core developer Stefan 'nine' Seifert has been able to make good use of the tool to assist in analyzing and optimizing the performance of a re-write of MoarVM's bytecode compiler in NQP code. Here are quotes from Stefan on IRC, along with links to the public IRC logs:

https://colabti.org/irclogger/irclogger_log/perl6-dev?date=2018-10-09#l56

I now managed to get a profile of the MAST stage using a patched --profile-stage, but the profile is too large even for the QT frontend :/
timotimo++ # moarperf works beautifully with this huge profile :)


https://colabti.org/irclogger/irclogger_log/perl6-dev?date=2018-10-23#l66

timotimo++ # the profiler is indespensable in getting nqp-mbc merge worthy.


Below you can find a copy of the "Deliverables and Inchstones" section of the original grant proposal with explanations of the current status/progress and any additional features I've been able to implement, along with a couple of screenshots to illustrate what I mean.

The "web frontend for heap snapshot analyzer" and "bunch of examples" sections have had their sub-headings removed, because work on that area has not yet started.

# Deliverables and Inchstones

## A blog with progress reports.

There have been 6 reports; the low number is mostly due to a long starting delay and some additional intervention from Life Itself.

## A web frontend for the heap snapshot analyzer

I have pushed this off to the future, opting to prioritize the instrumented profiler frontend instead.

## A new web frontend for the instrumented profiler

• Communication between the web frontend and a backend for data queries
Some things are currently delivered via a "status message" websocket, others via GET queries. I might move a few of these from one mode to another, for example to make large results or multi-part results arrive in chunks, so that the UI may already display something before the entire data is available.

• Overview page with at-a-glance metrics for quick feedback for code changes
Just like the previous frontend, this page shows a whole bunch of high-level metrics about the program in general. These let you easily spot very big differences between two versions of the same code, or between the same code run by two different versions of rakudo.
The Threads section shows the total run time of the profiled code, and how long the specializer has been busy, but on top of that it also displays the number of threads still active when profiling finished, and also when these threads were started relative to the thread that initiated profiling.
The GC section has the total time spent running the GC, how often it ran minor and major collections, and the minimum, maximum, and average time of minor vs major collections.
The Call Frames section is equivalent to the old profiler's Call Frames part, except that it formats the numbers with separators for thousands and such, which makes it a lot more readable to my eyes.
The Dynamic Optimization section is also equivalent to the old profiler's Dynamic Optimization section.

• Call Graph explorer
The Call Graph Explorer is implemented and gives access to all threads that have run code. Navigating forwards is possible via a table of callees, and navigating backwards is possible via a breadcrumbs display, a dedicated "up" button near the callee list, and using the browser's back button or history. Likewise, navigating "forwards" is also possible using the browser's forwards button.
• Icicle Graph with acceptable performance
I have not yet started this.
• Search function for Call Graph Explorer
There is now a search field above the list of callees. Entering parts of a routine's name will put a little right-pointing hand icon in front of the callees that directly or indirectly call a routine with that name.
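The "directly or indirectly calls" check behind that icon amounts to a recursive walk over the call graph. A hypothetical sketch (the node shape here is invented for illustration):

```python
def calls_matching(node, query):
    """True if this node, or anything it transitively calls, has `query`
    as part of its routine name -- the condition behind the little
    pointing-hand icon next to a callee."""
    if query in node["name"]:
        return True
    return any(calls_matching(c, query) for c in node["callees"])

# A tiny example call graph: mainline -> render-table -> sort-rows,
#                            mainline -> parse-input
call_graph = {"name": "mainline", "callees": [
    {"name": "render-table", "callees": [
        {"name": "sort-rows", "callees": []},
    ]},
    {"name": "parse-input", "callees": []},
]}
```

With this graph, searching for "sort" would flag both mainline and render-table, since each indirectly reaches sort-rows.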

• Routines List
The Routines List ("overview") is implemented and offers a good range of information on routines seen throughout the profile. In order to not make the browser too laggy, it initially only loads the first 100 entries, along with a button that lets the user request 100 more.
• Sortable by every sensible column
Currently some sorting options are not present: Sorting by "inclusive time per call" or "exclusive time per call" would probably be the most interesting, followed perhaps by "percentage jitted".
• Groupable by filename
I have not yet started on this. It's a low priority at the moment, unless someone asks to get it sooner.
• Expand a routine to see its callers or callees
Expanding a routine offers tabs with callers, callees, allocations, and "paths" (see next point)

• … with links directly to the call graph explorer
Since routines in the routine list encompass any amount of different places in the call graph, having just one link to the call graph explorer doesn't make sense. Instead, there is a tab that shows all paths from thread entries to the given routine. All of the entries in these paths are direct links into the call graph explorer.
• Bonus Features delivered
Clicking on a little icon next to a routine's name will jump to and highlight the routine in the routine list itself.

• Allocation explorer
The Allocation Explorer is implemented. It's very simple, but definitely useful. On top of giving you a list of all routines that allocate any given type, you can immediately look at a "paths" view like the Routines List has. In the future, it will also point out how many allocations came from which parts of the tree.
There's not yet anything to graphically show how much the types are "worth" in relation to each other, but compared to the previous profiler, it shows not only the number of allocations, but also the size of each object.
• Show what routines allocate a given class
Done. Includes links directly to the routine overview list.
• Expand a routine to see which of its callers is responsible for what amount of allocations
This feature is invaluable for situations where the allocating routine for a type is always just "method new". You can get more detailed information already using the "paths" view. Soon it will also tell you which paths are responsible for what fraction of total allocations.
• Link to the heap snapshot explorer for any given type
Since the heap snapshot explorer is going to be built in the second part of the grant, this is not yet applicable.
• Bonus Features delivered:
• Display type sizes
The display of types has been enhanced to show not only total number of allocations, but also size per object, size in total, and a hint for types that have varying amounts of extra data per object, like an array or hash for example.
Types that are from the core setting and don't look like internal classes have links to the documentation website next to them.
• OSR and Deopt explorer
This part is currently completely missing.
• Show routines that have been OSR'd (On-Stack-Replaced by the optimizer)
• Again, allowing to expand routines to see their callers
• Explain in clear terms what OSR and Deopt mean

• GC explorer
This part has been the most "interesting" to work on. Changes to moarvm had to be made, and a concept for displaying GC behavior across multiple threads was needed.
• Figure out a good way to present information from multi-threaded programs here
Currently, every entry in the "GC runs" table shows which threads (by ID) have participated in GC-ing, and expanding each row gives you timing information for each individual thread - specifically how long other threads took to join in, and how long each took.
• Expose time-of-collection info to profiler
Every row in the GC table has a "since start" and "since previous" entry.

• Show the time and duration of collections, how much data was kept, promoted to the old generation, or deleted, and whether the collection was a minor or major one
This is exposed in the expanded GC table rows.
• Filterable to show major collections only
The user can choose between "major only", "minor and major", or "minor only".
• Compactly graph collection times and kept/promoted/deleted amounts
There's bar charts at the very top for "time taken" and "time since start", and the choice of "combined", "split", and "relativized" graphs for the amounts of nursery data promoted/kept/freed of each gc run.

## User-facing documentation on using the profiling facilities and interpreting the data

This has not yet happened, except for the blog post for the release that explains how to run the profiler, and the README.

## Build a bunch of examples that show different performance issues and describe how to figure them out

There are currently big changes going on in MoarVM's optimizer relating to removing unnecessary boxing of intermediate values in calculations and such, which may make any examples moot real soon. I've delayed this part of the work a bit for that reason.

# In Summary ...

I was positively surprised when I last opened the original tasks list and saw what was already done, and what could be finished with just a little bit of work!

I hope you'll give moarperf a try some time soon and let me know what you think! Here's a link to the project on github: MoarPerf project on github

Thanks for staying around and reading my blog posts :)
- Timo

## Zoffix Znet: Perl 6 Advent Calendar 2018 Call for Authors

### Published on 2018-10-31T00:00:00

Write a blog post about Perl 6

## Zoffix Znet: A Request to Larry Wall to Create a Language Name Alias for Perl 6

### Published on 2018-10-07T00:00:00

The culmination of the naming discussion

## 6guts: Speeding up object creation

Recently, a Perl 6 object creation benchmark result did the rounds on social media. This Perl 6 code:

class Point {
    has $.x;
    has $.y;
}

my $total = 0;
for ^1_000_000 {
    my $p = Point.new(x => 2, y => 3);
    $total = $total + $p.x + $p.y;
}
say $total;

Now (on HEAD builds of Rakudo and MoarVM) runs faster than this roughly equivalent Perl 5 code:

use v5.10;

package Point;

sub new {
    my ($class, %args) = @_;
    bless \%args, $class;
}

sub x {
    my $self = shift;
    $self->{x}
}

sub y {
    my $self = shift;
    $self->{y}
}

package main;

my $total = 0;
for (1..1_000_000) {
    my $p = Point->new(x => 2, y => 3);
    $total = $total + $p->x + $p->y;
}
say $total;


(Aside: yes, I know there’s no shortage of libraries in Perl 5 that make OO nicer than this, though those I tried also made it slower.)

This is a fairly encouraging result: object creation, method calls, and attribute access are the operational bread and butter of OO programming, so it’s a pleasing sign that Perl 6 on Rakudo/MoarVM is getting increasingly speedy at them. In fact, it’s probably a bit better at them than this benchmark’s raw numbers show, since:

• The measurements include startup time. Probably 10%-15% of the time Rakudo spends executing the program is startup, parsing, compilation, etc. (noting the compiler is itself written in Perl 6). By contrast, Perl 5 has really fast startup and a parser written in C.
• The math in Perl 6 is arbitrary precision (that is, will upgrade to a big integer if needed); it costs something to handle that, even if it doesn’t happen
• Every math operation allocates a new Int object; there’s two uses of + here, so that means two new Int allocations per iteration of the loop, which then have to be garbage collected

While dealing with Int values got faster recently, it’s still really making two Int objects every time around that loop and having to GC them later. An upcoming new set of analyses and optimizations will let us get rid of that cost too. And yes, startup will get faster with time also. In summary, while Rakudo/MoarVM is now winning that benchmark against Perl 5, there’s still lots more to be had. Which is a good job, since the equivalent Python and Ruby versions of that benchmark still run faster than the Perl 6 one.

In this post, I’ll look at the changes that allowed this benchmark to end up faster. None of the new work was particularly ground-breaking; in fact, it was mostly a case of doing small things to make better use of optimizations we already had.

### What happens during construction?

Theoretically, the default new method in turn calls bless, passing the named arguments along. The bless method then creates an object instance, followed by calling BUILDALL. The BUILDALL method goes through the set of steps needed for constructing the object. In the case of a simple object like ours, that involves checking if an x and y named argument were passed, and if so assigning those values into the Scalar containers of the object attributes.

For those keeping count, that’s at least 3 method calls (new, bless, and BUILDALL).

However, there’s a cheat. If bless wasn’t overridden (which would be an odd thing to do anyway), then the default new could just call BUILDALL directly anyway. Therefore, new looks like this:

multi method new(*%attrinit) {
    nqp::if(
      nqp::eqaddr(
        (my $bless := nqp::findmethod(self,'bless')),
        nqp::findmethod(Mu,'bless')
      ),
      nqp::create(self).BUILDALL(Empty, %attrinit),
      $bless(|%attrinit)
    )
}


The BUILDALL method was originally a little “interpreter” that went through a per-object build plan stating what needs to be done. However, for quite some time now we’ve instead compiled the build plan into a per-class BUILDALL method.
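The difference between interpreting a build plan and compiling it per class can be pictured in miniature. This is a Python analogy only (Rakudo generates the method at the bytecode level, not like this); `compile_buildall` and the attribute list are illustrative:

```python
def compile_buildall(attr_names):
    """Return a BUILDALL-like function specialized to one class's
    attributes, instead of re-interpreting a generic build plan on
    every construction."""
    def buildall(obj, attrinit):
        for name in attr_names:
            if name in attrinit:        # was this named argument passed?
                setattr(obj, name, attrinit[name])
        return obj
    return buildall

class Point:
    pass

# Done once per class, not once per object:
Point.BUILDALL = staticmethod(compile_buildall(["x", "y"]))

p = Point.BUILDALL(Point(), {"x": 2, "y": 3})
```

The per-class closure bakes in the attribute names, so construction no longer has to walk a data structure describing them.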

To figure out how to speed this up, I took a look at how the specializer was handling the code. The answer: not so well. There were certainly some positive moments in there. Of note:

• The x and y accessor methods were being inlined
• The + operators were being inlined
• Everything was JIT-compiled into machine code

However, the new method was getting only a “certain” specialization, which is usually only done for very heavily polymorphic code. That wasn’t the case here; this program clearly constructs overwhelmingly one type of object. So what was going on?

In order to produce an “observed type” specialization – the more powerful kind – it needs to have data on the types of all of the passed arguments. And for the named arguments, that was missing. But why?

Logging of passed argument types is done on the callee side, since:

• If the callee is already specialized, then we don’t want to waste time in the caller doing throwaway logging
• If the callee is not specialized, but the caller is, then we don’t want to miss out on logging types (and we only log when running unspecialized code)

The argument logging was done as the interpreter executed each parameter processing instruction. However, when there’s a slurpy, then it would just swallow up all the remaining arguments without logging type information. Thus the information about the argument types was missing and we ended up with a less powerful form of specialization.

Teaching the slurpy handling code about argument logging felt a bit involved, but then I realized there was a far simpler solution: log all of the things in the argument buffer at the point an unspecialized frame is being invoked. We’re already logging the entry to the call frame at that point anyway. This meant that all of the parameter handling instructions got a bit simpler too, since they no longer had logging to do.

### Conditional elimination

Having new being specialized in a more significant way was an immediate win. Of note, this part:

      nqp::eqaddr(
        (my $bless := nqp::findmethod(self,'bless')),
        nqp::findmethod(Mu,'bless')
      ),

Was quite interesting. Since we were now specializing on the type of self, then the findmethod could be resolved into a constant. The resolution of a method on the constant Mu was also a constant. Therefore, the result of the eqaddr (same memory address) comparison of two constants should also have been turned into a constant…except that wasn’t happening!

This time, it was simple: we just hadn’t implemented folding of that yet. So, that was an easy win, and once done meant the optimizer could see that the if would always go a certain way and thus optimize the whole thing into the chosen branch. Thus the new method was specialized into something like:

multi method new(*%attrinit) {
    nqp::create(self).BUILDALL(Empty, %attrinit)
}

Further, the create could be optimized into a “fast create” op, and the relatively small BUILDALL method could be inlined into new. Not bad.

### Generating simpler code

At this point, things were much better, but still not quite there. I took a look at the BUILDALL method compilation, and realized that it could emit faster code.

The %attrinit is a Perl 6 Hash object, which is for the most part a wrapper around the lower-level VM hash object, which is the actual hash storage. We were, curiously, already pulling this lower-level hash out of the Hash object and using the nqp::existskey primitive to check if there was a value, but were not doing that for actually accessing the value. Instead, an .AT-KEY('x') method call was being done. While that was being handled fairly well – inlined and so forth – it also does its own bit of existence checking. I realized we could just use the nqp::atkey primitive here instead.

Later on, I also realized that we could do away with nqp::existskey and just use nqp::atkey. Since a VM-level null is something that never exists in Perl 6, we can safely use it as a sentinel to mean that there is no value.
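The existskey-then-atkey versus single-atkey-with-sentinel difference can be illustrated in Python (an analogy only; MoarVM's sentinel is the VM-level null, which can never be a real Perl 6 value):

```python
# Stand-in for the VM-level null: a sentinel that can never be a user value.
MISSING = object()

def init_attr(obj, name, attrinit):
    """One hash lookup both fetches the value and answers 'was it
    passed?' -- no separate existence check needed."""
    value = attrinit.get(name, MISSING)
    if value is not MISSING:
        setattr(obj, name, value)

class Point:
    pass

p = Point()
init_attr(p, "x", {"x": 2})   # x was passed: one lookup, then assign
init_attr(p, "y", {"x": 2})   # y was not: same single lookup, no assign
```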
That got us down to a single hash lookup. By this point, we were just about winning the benchmark, but only by a few percent. Were there any more easy pickings?

### An off-by-one

My next surprise was that the new method didn’t get inlined. It was within the size limit. There was nothing preventing it. What was going on?

Looking closer, it was even worse than that. Normally, when something is too big to be inlined, but we can work out what specialization will be called on the target, we do “specialization linking”, indicating which specialization of the code to call. That wasn’t happening. But why?

Some debugging later, I sheepishly fixed an off-by-one in the code that looks through a multi-dispatch cache to see if a particular candidate matches the set of argument types we have during optimization of a call instruction. This probably increased the performance of quite a few calls involving passing named arguments to multi-methods – and meant new was also inlined, putting us a good bit ahead on this benchmark.

### What next?

The next round of performance improvements – both to this code and plenty more besides – will come from escape analysis, scalar replacement, and related optimizations. I’ve already started on that work, though expect it will keep me busy for quite a while. I will, however, be able to deliver it in stages, and am optimistic I’ll have the first stage of it ready to talk about – and maybe even merge – in a week or so.

## brrt to the future: A future for fork(2)

### Published by Bart Wiegmans on 2018-10-04T05:18:00

Hi hackers. Today I want to write about a new functionality that I've been developing for MoarVM that has very little to do with the JIT compiler. But it is still about VM internals so I guess it will fit.

Many months ago, jnthn wrote a blog post on the relation between perl 5 and perl 6.
And as a longtime and enthusiastic perl 5 user - most of the JIT's compile time support software is written in perl 5 for a reason - I wholeheartedly agree with the 'sister language' narrative. There is plenty of room for all sorts of perls yet, I hope. Yet one thing kept itching me:

Moreover, it’s very much the case that Perl 5 and Perl 6 make different trade-offs. To pick one concrete example, Perl 6 makes it easy to run code across multiple threads, and even uses multiple threads internally (for example, performing optimization and JIT compilation on a background thread). Which is great…except the only winning move in a game involving both threads and fork() is not to play. Sometimes one just can’t have their cake and eat it, and if you’re wanting a language that more directly gives you your POSIX, that’s probably always going to be a strength of Perl 5 over Perl 6.

(Emphasis mine). You see, I had never realized that MoarVM couldn't implement fork(), but it's true. In POSIX systems, a fork()'d child process inherits the full memory space, as-is, from its parent process. But it inherits only the forking thread. This means that any operations performed by any other thread, including operations that might need to be protected by a mutex (e.g. malloc()), will be interrupted and unfinished (in the child process). This can be a problem. Or, in the words of the linux manual page on the subject:

* The child process is created with a single thread—the one that called fork(). The entire virtual address space of the parent is replicated in the child, including the states of mutexes, condition variables, and other pthreads objects; the use of pthread_atfork(3) may be helpful for dealing with problems that this can cause.

* After a fork() in a multithreaded program, the child can safely call only async-signal-safe functions (see signal-safety(7)) until such time as it calls execve(2).
Note that the set of signal-safe functions is not that large, and excludes all memory management functions. As noted by jnthn, MoarVM is inherently multithreaded from startup, and will happily spawn as many threads as needed by the program. It also uses malloc() and friends rather a lot. So it would seem that perl 6 cannot implement POSIX fork() (in which both parent and child program continue from the same position in the program) as well as perl 5 can.

I was disappointed. As a longtime (and enthusiastic) perl 5 user, fork() is one of my favorite concurrency tools. Its best feature is that parent and child processes are isolated by the operating system, so each can modify its own internal state without concern for concurrency. This can make it practical to introduce concurrency after development, rather than designing it in from the start. Either process can crash while the other continues. It is also possible (and common) to run a child process with reduced privileges relative to the parent process, which isn't possible with threads. And it is possible to prepare arbitrarily complex state for the child process, unlike spawn() and similar calls.

Fortunately all hope isn't necessarily lost. The restrictions listed above only apply if there are multiple threads active at the moment that fork() is executed. That means that if we can stop all threads (except for the one planning to fork) before actually calling fork(), then the child process can continue safely. That is well within the realm of possibility, at least as far as system threads are concerned.

The process itself isn't very exciting to talk about, actually - it involves sending stop signals to system threads, waiting for these threads to join, verifying that the forking thread is really the only active thread, and restarting threads after fork(). Because of locking, it is a bit subtle (there may be another user thread that is also trying to fork), but not actually very hard.
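The stop-all-threads-then-fork idea can be sketched in Python on a POSIX system. This is illustrative only: MoarVM signals its own system threads through its internal machinery, not via an event flag like this one.

```python
import os
import threading

stop = threading.Event()

def system_thread():
    # Stands in for a background thread such as the spesh worker.
    stop.wait()

workers = [threading.Thread(target=system_thread) for _ in range(2)]
for t in workers:
    t.start()

# "Stop the world": signal the background threads and wait for them to
# join, so that only the forking thread remains active at fork() time.
stop.set()
for t in workers:
    t.join()

pid = os.fork()
if pid == 0:
    # Child: continues alone with the single inherited thread;
    # here it just exits with a recognizable status.
    os._exit(7)
_, status = os.waitpid(pid, 0)
child_code = os.WEXITSTATUS(status)
```

Since no other thread can hold a mutex at the moment of fork(), the child starts from a consistent state.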
When I finally merged the code, it turned out that an ancient (and forgotten) thread list modification function was corrupting the list by not being aware of generational garbage collection... oops. But that was simple enough to fix (thanks to timotimo++ for the hint).

And now the following oneliner should work on platforms that support fork() (using development versions of MoarVM and Rakudo):

perl6 -e 'use nqp; my $i = nqp::fork(); say $i;'

The main limitation of this work is that it won't work if there are any user threads active. (If there are any, nqp::fork() will throw an exception). The reason why is simple: while it is possible to adapt the system threads so that I can stop them on demand, user threads may be doing arbitrary work, hold arbitrary locks and may be blocked (possibly indefinitely) on a system call. So jnthn's comment at the start of this post still applies - threads and fork() don't work together.

In fact, many programs may be better off with threads. But I think that languages in the perl family should let the user make that decision, rather than the VM. So I hope that this will find some use somewhere. If not, it was certainly fun to figure out. Happy hacking!

PS: For the curious, I think there may in fact be a way to make fork() work under a multithreaded program, and it relies on the fact that MoarVM has a multithreaded garbage collector. Basically, stopping all threads before calling fork() is not so different from stopping the world during the final phase of garbage collection. And so - in theory - it would be possible to hijack the synchronization mechanism of the garbage collector to pause all threads. During interpretation, and in JIT compiled code, each thread periodically checks if garbage collection has started. If it has, it will synchronize with the thread that started GC in order to share the work.
Threads that are currently blocked (on a system call, or on acquiring a lock) mark themselves as such, and when they are resumed always check the GC status. So it is in fact possible to force MoarVM into a single active thread even with multiple user threads active. However, that still leaves cleanup to deal with, after returning from fork() in the child process. Also, this will not work if a thread is blocked on NativeCall. Altogether I think abusing the GC in order to enable a fork() may be a bit over the edge of insanity :-)

## 6guts: Eliminating unrequired guards

### Published by jnthnwrthngtn on 2018-09-29T19:59:28

MoarVM’s optimizer can perform speculative optimization. It does this by gathering statistics as the program is interpreted, and then analyzing them to find out what types and callees typically show up at given points in the program. If it spots there is at least a 99% chance of a particular type showing up at a particular program point, then it will optimize the code ahead of that point as if that type would always show up.

Of course, statistics aren’t proofs. What about the 1% case? To handle this, a guard instruction is inserted. This cheaply checks if the type is the expected one, and if it isn’t, falls back to the interpreter. This process is known as deoptimization.

### Just how cheap are guards?

I just stated that a guard cheaply checks if the type is the expected one, but just how cheap is it really? There’s both direct and indirect costs. The direct cost is that of the check. Here’s a (slightly simplified) version of the JIT compiler code that produces the machine code for a type guard.
/* Load object that we should guard */
| mov TMP1, WORK[obj];
/* Get type table we expect and compare it with the object's one */
MVMint16 spesh_idx = guard->ins->operands[2].lit_i16;
| get_spesh_slot TMP2, spesh_idx;
| cmp TMP2, OBJECT:TMP1->st;
| jne >1;
/* We're good, no need to deopt */
| jmp >2;
|1:
/* Call deoptimization handler */
| mov ARG1, TC;
| mov ARG2, guard->deopt_offset;
| mov ARG3, guard->deopt_target;
| callp &MVM_spesh_deopt_one_direct;
/* Jump out to the interpreter */
| jmp ->exit;
|2:

Where get_spesh_slot is a macro like this:

|.macro get_spesh_slot, reg, idx;
| mov reg, TC->cur_frame;
| mov reg, FRAME:reg->effective_spesh_slots;
| mov reg, OBJECTPTR:reg[idx];
|.endmacro

So, in the case that the guard matches, it’s 7 machine instructions (note: it’s actually a couple more because of something I omitted for simplicity). Thus there’s the cost of the time to execute them, plus the space they take in memory and, especially, the instruction cache. Further, one is a conditional jump. We’d expect it to be false most of the time, and so the CPU’s branch predictor should get a good hit rate – but branch predictor usage isn’t entirely free of charge either. Effectively, it’s not that bad, but it’s nice to save the cost if we can.

The indirect costs are much harder to quantify. In order to deoptimize, we need to have enough state to recreate the world as the interpreter expects it to be. I wrote on this topic not so long ago, for those who want to dive into the detail, but the essence of the problem is that we may have to retain some instructions and/or forgo some optimizations so that we are able to successfully deoptimize if needed. Thus, the presence of a guard constrains what optimizations we can perform in the code around it.
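Stripped of the machine-code details, the runtime behaviour of such a guard amounts to the following caricature (Python for illustration, not MoarVM code; the function names are invented):

```python
def guard(obj, expected_type, specialized, deopt):
    """The cmp/jne pair in the JITted code: stay on the specialized fast
    path when the type matches, otherwise fall back to the general
    (interpreter) path -- deoptimization."""
    if type(obj) is expected_type:   # in MoarVM, a cheap pointer compare
        return specialized(obj)
    return deopt(obj)                # rare, but must always be possible

double_fast = lambda n: n + n        # code specialized assuming int
double_slow = lambda x: x * 2        # general fallback

result_fast = guard(21, int, double_fast, double_slow)
result_slow = guard("ab", int, double_fast, double_slow)
```

The statistics make the first branch overwhelmingly likely, which is what makes the check cheap in practice; the second branch is what keeps the speculation safe.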
### Representation problems

A guard instruction in MoarVM originally looked like:

sp_guard r(obj) sslot uint32

Where r(obj) is an object register to read containing the object to guard, the sslot is a spesh slot (an entry in a per-block constant table) containing the type we expect to see, and the uint32 indicates the target address after we deoptimize.

Guards are inserted after instructions for which we had gathered statistics and determined there was a stable type. Things guarded include return values after a call, reads of object attributes, and reads of lexical variables.

This design has carried us a long way, however it has a major shortcoming. The program is represented in SSA form. Thus, an invoke followed by a guard might look something like:

invoke r6(5), r4(2)
sp_guard r6(5), sslot(42), litui32(64)

Where r6(5) has the return value written into it (and thus is a new SSA version of r6). We hold facts about a value (if it has a known type, if it has a known value, etc.) per SSA version. So the facts about r6(5) would be that it has a known type – the one that is asserted by the guard.

The invoke itself, however, might be optimized by performing inlining of the callee. In some cases, we might then know the type of result that the inlinee produces – either because there was a guard inside of the inlined code, or because we can actually prove the return type! However, since the facts about r6(5) were those produced by the guard, there was no way to talk about what we know of r6(5) before the guard and after the guard.

More awkwardly, while in the early days of the specializer we only ever put guards immediately after the instructions that read values, more recent additions might insert them at a distance (for example, in speculative call optimizations and around spesh plugins). In this case, we could not safely set facts on the guarded register, because those might lead to wrong optimizations being done prior to the guard.
### Changing of the guards

Now a guard instruction looks like this:

sp_guard w(obj) r(obj) sslot uint32

Or, concretely:

invoke r6(5), r4(2)
sp_guard r6(6), r6(5), sslot(42), litui32(64)

That is to say, it introduces a new SSA version. This means that we get a way to talk about the value both before and after the guard instruction. Thus, if we perform an inlining and we know exactly what type it will return, then that type information will flow into the input – in our example, r6(5) – of the guard instruction. We can then notice that the property the guard wants to assert is already upheld, and replace the guard with a simple set (which may itself be eliminated by later optimizations).

This also solves the problem with guards inserted away from the original write of the value: we get a new SSA version beyond the guard point. This in turn leads to more opportunities to avoid repeated guards beyond that point.

Quite a lot of return value guards on common operations simply go away thanks to these changes. For example, in $a + $b, where $a and $b are Int, we will be able to inline the + operator, and we can statically see from its code that it will produce an Int. Thus, the guard on the return type in the caller of the operator can be eliminated. This saves the instructions associated with the guard, and potentially allows for further optimizations to take place since we know we’ll never deoptimize at that point.

### In summary

MoarVM does lots of speculative optimization. This enables us to optimize in cases where we can’t prove a property of the program, but statistics tell us that it mostly behaves in a certain way. We make this safe by adding guards, and falling back to the general version of the code in cases where they fail.

However, guards have a cost. By changing our representation of them, so that we model the data coming into the guard and after the guard as two different SSA versions, we are able to eliminate many guard instructions.
This not only reduces duplicate guards, but also allows for elimination of guards when the broader view afforded by inlining lets us prove properties that we weren't previously able to. In fact, upcoming work on escape analysis and scalar replacement will allow us to start seeing into currently opaque structures, such as Scalar containers. When we are able to do that, then we'll be able to prove further program properties, leading to the elimination of yet more guards. Thus, this work is not only immediately useful, but also will help us better exploit upcoming optimizations.

## 6guts: Faster box/unbox and Int operations

### Published by jnthnwrthngtn on 2018-09-28T22:43:55

My work on Perl 6 performance continues, thanks to a renewal of my grant from The Perl Foundation. I'm especially focusing on making common basic operations faster, the idea being that if those go faster, then programs composed of them should go faster too. This appears to be working out well: I've not been directly trying to make the Text::CSV benchmark run faster, but that's resulted from my work. I'll be writing a few posts on various of the changes I've done. This one will take a look at some related optimizations around boxing, unboxing, and common mathematical operations on Int.

### Boxing and unboxing

Boxing is taking a natively typed value and wrapping it into an object. Unboxing is the opposite: taking an object that wraps a native value and getting the native value out of it. In Perl 6, these are extremely common. Num and Str are boxes around num and str respectively. Thus, unless dealing with natives explicitly, working with floating point numbers and strings will involve lots of box and unbox operations.

There's nothing particularly special about Num and Str. They are normal objects with the P6opaque representation, meaning they can be mixed into. The only thing that makes them slightly special is that they have attributes marked as being a box target.
This marks the attribute as the one to write to or read from in a box or unbox operation. Thus, a box operation is something like:

• Create an instance of the box type
• Find out where in that object to write the value to
• Write the value there

While unbox is:

• Find out where in the object to read a value from
• Read it from there

### Specialization of box and unbox

box is actually two operations: an allocation and a store. We know how to fast-path allocations and JIT them relatively compactly, however that wasn't being done for box. So, step one was to decompose this higher-level op into the allocation and the write into the allocated object. The first step could then be optimized in the usual way allocations are.

In the unspecialized path, we first find out where to write the native value to, and then write it. However, when we're producing a specialized version, we almost always know the type we're boxing into. Therefore, the object offset to write to can be calculated once, and a very simple instruction to do a write at an offset into the object produced. This JITs extremely well.

There are a couple of other tricks. Binds into a P6opaque generally have to check that the object wasn't mixed in to, however since we just allocated it then we know that can't be the case and can skip that check. Also, a string is a garbage-collectable object, and when assigning one GC-able object into another one, we need to perform a write barrier for the sake of generational GC. However, since the object was just allocated, we know very well that it is in the nursery, and so the write barrier will never trigger. Thus, we can omit it.

Unboxing is easier to specialize: just calculate the offset, and emit a simpler instruction to load the value from there.

I've also started some early work (quite a long way from merge) on escape analysis, which will allow us to eliminate many box object allocations entirely.
It's a great deal easier to implement this if allocations, reads, and writes to an object have a uniform representation. By lowering box and unbox operations into these lower level operations, this eases the path to implementing escape analysis for them.

### What about Int?

Some readers might have wondered why I talked about Num and Str as examples of boxed types, but not Int. It is one too – but there's a twist. Actually, there's two twists.

The first is that Int isn't actually a wrapper around an int, but rather an arbitrary precision integer. When we first implemented Int, we had it always use a big integer library. Of course, this is slow, so later on we made it so any number fitting into a 32-bit range would be stored directly, and only allocate a big integer structure if it's outside of this range.

Thus, boxing to a big integer means range-checking the value to box. If it fits into the 32-bit range, then we can write it directly, and set the flag indicating that it's a small Int. Machine code to perform these steps is now spat out directly by the JIT, and we only fall back to a function call in the case where we need a big integer. Once again, the allocation itself is emitted in a more specialized way too, and the offset to write to is determined once at specialization time.

Unboxing is similar. Provided we're specializing on the type of the object to unbox, then we calculate the offset at specialization time. Then, the JIT produces code to check if the small Int flag is set, and if so just reads and sign extends the value into a 64-bit register. Otherwise, it falls back to the function call to handle the real big integer case.

For boxing, however, there was a second twist: we have a boxed integer cache, so for small integers we don't have to repeatedly allocate objects boxing them. So boxing an integer is actually:

1. Check if it's in the range of the box cache
2. If so, return it from the cache
3. Otherwise, do the normal boxing operation

When I first did these changes, I omitted the use of the box cache. It turns out, however, to have quite an impact in some programs: one benchmark I was looking at suffered quite a bit from the missing box cache, since it now had to do a lot more garbage collection runs. So, I reinstated use of the cache, but this time with the JIT doing the range checks in the produced machine code and reading directly out of the cache in the case of a hit. Thus, in the cache hit case, we now don't even make a single function call for the box operation.

### Faster Int operations

One might wonder why we picked 32-bit integers as the limit for the small case of a big integer, and not 64-bit integers. There are two reasons. The most immediate is that we can then use the 32 bits that would be the lower 32 of a 64-bit pointer to the big integer structure as our "this is a small integer" flag. This works reliably as pointers are always aligned to at least a 4-byte boundary, so a real pointer to a big integer structure would never have the lowest bits set. (And yes, on big-endian platforms, we swap the order of the flag and the value to account for that!)

The second reason is that there's no portable way in C to detect if a calculation overflowed. However, out of the basic math operations, if we have two inputs that fit into a 32-bit integer, and we do them at 64-bit width, we know that the result can never overflow the 64-bit integer. Thus we can then range check the result and decide whether to store it back into the result object as 32-bit, or to store it as a big integer.

Since Int is immutable, all operations result in a newly allocated object. This allocation – you'll spot a pattern by now – is open to being specialized. Once again, finding the boxed value to operate on can also be specialized, by calculating its offset into the input objects and result object. So far, so familiar.
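The small-versus-big decision and the box cache described above can be modelled with a short Python sketch (the cache bounds and tuple representation are invented for illustration; MoarVM's real implementation is JIT-compiled machine code):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

# Hypothetical box cache for commonly used small values (bounds invented).
BOX_CACHE = {v: ('small-int-box', v) for v in range(-1, 256)}

def box_int(value):
    """Sketch of the Int boxing fast paths."""
    if value in BOX_CACHE:
        return BOX_CACHE[value]            # cache hit: no allocation at all
    if INT32_MIN <= value <= INT32_MAX:
        return ('small-int-box', value)    # fits 32 bits: store inline + flag
    return ('big-int-box', value)          # otherwise: big integer structure

assert box_int(42) is BOX_CACHE[42]        # cached boxes are shared objects
```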
However, there's a further opportunity for improvement if we are JIT-compiling the operations to machine code: the CPU has flags for whether the last operation overflowed, and we can get at them. Thus, for two small Int inputs, we can simply:

1. Grab the values
2. Do the calculation at 32-bit width
3. Check the flags, and store it into the result object if no overflow
4. If it overflowed, fall back to doing it wider and storing it as a real big integer

I've done this for addition, subtraction, and multiplication. Those looking for a MoarVM specializer/JIT task might like to consider doing it for some of the other operations. :-)

### In summary

Boxing, unboxing, and math on Int all came with various indirections for the sake of generality (coping with mixins, subclassing, and things like IntStr). However, when we are producing type-specialized code, we can get rid of most of the indirections, resulting in being able to perform them faster. Further, when we JIT-compile the optimized result into machine code, we can take some further opportunities, reducing function calls into C code as well as taking advantage of access to the overflow flags.

## samcv: Adding and Improving File Encoding Support in MoarVM

### Published on 2018-09-26T07:00:00

Encodings. They may seem to some as horribly boring and bland. Nobody really wants to have to worry about the details. Encodings are how we communicate and transmit data. Their purpose is to be understood by others. When they work, nobody should even know they are there. When they don't – and everyone can probably remember a time when they have tried to open a poorly encoded or corrupted file and been disappointed – they cannot be ignored.
Here I talk about the work I have done in the past, and work still in progress, to improve the support of current encodings, add new encodings, and add new features and options to allow Perl 6 and MoarVM to support more encodings, in a way which better achieves the goals encodings were designed to solve.

## Which Encodings Should We Add?

I started looking at the most common encodings on the Web. We supported the two most common, UTF-8 and ISO-8859-1, but we did not support windows-1251 (Cyrillic) or Shift-JIS. This seemed like the makings of a new project, so I got to work.

I decided to start with windows-1251, since it was an 8-bit encoding, while Shift-JIS was a variable length one or two byte encoding (the "shift" in the name comes from the second byte shift). While the encoding itself was simpler than Shift-JIS, I soon realized some issues with both windows-1251 and our already supported windows-1252 encoding.

One of the first major issues I found in windows-1252 was that you could create files which could not be decoded by other programs. An example of this is codepoint 0x81 (129), which does not have a mapping in the windows-1252 specification. This would cause many programs to not detect the file as windows-1252, or to fail saying the file was corrupted and could not be decoded.

While our non-8-bit encodings would throw an error if asked to encode text which did not exist in that codemapping, our 8-bit encodings would happily pass through invalid codepoints, as long as they fit in 8 bits. As I said at the start, encodings are a way for us to communicate, to exchange data and to be understood. This to me indicated a major failing. While the solution could be to never attempt to write codepoints that don't exist in the target encoding, that was not an elegant solution and would just be a workaround.

To remedy this I had to add new ops, such as decodeconf to replace the decode op, and encodeconf to replace encode (plus many others).
This allows us to specify a configuration for our encodings, and allows Perl 6 to tell MoarVM whether it wants to encode strictly according to the standard or to be more lenient. I added a new :strict option to open, encode and a few others to let you specify whether encoding should be strict or not. For the sake of backwards compatibility it still defaults to lenient encoding and decoding. Strict is planned to become the default for Perl 6 specification 6.e.

## Replacements

Some of our other encodings had support for 'replacements'. If you tried to encode something that would not fit in the target encoding, this allowed you to supply a string – one or more characters – which would be substituted instead of causing MoarVM to throw an exception. Once I had the strictness nailed down, I was able to add support for replacements, so one can write data in strict mode while still ensuring all compatible characters get written properly. You no longer have to choose between being lenient (and writing incompatible files) and being strict (and having the encoding or file write fail when you really need it not to).

## Shift-JIS

Encodings are not without difficulty. As with the previous encodings I talked about, Shift-JIS would not be without decisions that had to be made. With Shift-JIS, the question became: "which Shift-JIS?".

You see, there are dozens of different extensions to Shift-JIS, in addition to the original Shift-JIS encoding. As a variable length encoding, most of the one byte codepoints had been taken, while there were many hundreds of open and unallocated codepoints in the two byte range. This saw the wide proliferation of manufacturer-specific extensions, which other manufacturers might adopt while adding their own extensions to the format.

Which of the dozens of different Shift-JIS encodings should we support?
I eventually decided on windows-932, because that is the standard used by browsers when they encounter Shift-JIS text, and it encompasses a great many symbols. It is the most widely used Shift-JIS format. This would allow us to support the majority of Shift-JIS encoded documents out there, and it is the one which web browsers use on the web. Most, but not all, of the characters in it are compatible with the other extensions, so it seemed like the sensible choice.

The only notable exception to this is that windows-932 is not totally compatible with the original Shift-JIS encoding. While the original Shift-JIS encoding (and some extensions to it) map ASCII's backslash to the yen symbol ¥, windows-932 keeps it as the backslash. This was to retain better compatibility with ASCII and other formats.

## UTF-16

While MoarVM had a UTF-16 encoder and decoder, it was not fully featured.

• You could encode a string into a utf16 buffer:

```
"hello".encode('utf16') #> utf16:0x<68 65 6c 6c 6f>
```

• You could decode a utf16 buffer into a string:

```
utf16.new(0x68, 0x65, 0x6C, 0x6C, 0x6F).decode('utf16') #> hello
```

That was about it. You couldn't open a filehandle as UTF-16 and write to it. You couldn't even do fh.write: "hello".encode('utf16'), as the write function did not know how to deal with writing a 16-bit buffer (it expected an 8-bit buffer).

In addition there was no streaming UTF-16 decoder, so there was no way to read a UTF-16 file. So I set to work, first allowing us to write 16-bit buffers to a file, and then making it possible to .print to a filehandle and write text to it.

At this point I knew that I would have to confront the BOM in the room, but decided to first implement the streaming decoder and ensure all of our file handle operations worked.

### The BOM in the Room

You may be noticing a theme here. You do not just implement an encoding; it does not exist in a vacuum. While there may be standards, there may be multiple standards. And those standards may or may not be followeded in practice, or may not be totally straightforward.

Endianness defines the order of the bytes in a number. Big endian machines will store a 16-bit number with the larger section first, similar to how we write numbers, while little endian machines will put the smallest section first. This only matters for encoding numbers that are more than one byte. UTF-16 can be either big endian or little endian, meaning big and little endian files will contain the same 16-bit values, but the byte order within each is different.
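The difference in byte order is easy to see with Python's struct module:

```python
import struct

n = 0x0102                       # a 16-bit number
big = struct.pack('>H', n)       # big endian: larger section first
little = struct.pack('<H', n)    # little endian: smaller section first

assert big == b'\x01\x02'
assert little == b'\x02\x01'
# Same number, same two bytes, opposite order.
```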

Since swapping the two bytes of a 16-bit integer yields a different valid integer rather than an invalid one, the creators of UTF-16 knew it was not possible to determine with certainty the endianness of a 16-bit number. And so the byte order mark was created: a codepoint added at the start of a file to signify which endianness the file was in. Since they didn't want to break already existing software, they used a "zero width no-break space" (ZWNBSP) and designated that a ZWNBSP with bytes reversed would be invalid in UTF-16, a "non-character".

If the BOM is not removed, it is not visible, since it is zero width. If your program opens a file and sees a byte order mark, it knows it is in the correct endianness. If it sees the "non-character", then it knows it has to swap the bytes as it reads the file.
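That detection amounts to a check on the first two bytes. A minimal sketch in Python (the function name is invented for this illustration):

```python
def detect_utf16_endianness(data: bytes) -> str:
    """Inspect a possible byte order mark at the start of UTF-16 data."""
    if data[:2] == b'\xfe\xff':
        return 'big'        # U+FEFF stored big endian: read as-is
    if data[:2] == b'\xff\xfe':
        return 'little'     # bytes reversed: swap when reading
    return 'unknown'        # no BOM: fall back to context

assert detect_utf16_endianness(b'\xfe\xff\x00h') == 'big'
```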

The standard states that utf16 should always use a BOM, and that when reading a file as utf16 the BOM should be used to detect the endianness. It states the BOM is not passed through, as it is assumed not to be part of the actual data. If it doesn't exist, then the system is supposed to decode as the endianness that the context suggests. So you get a format where you may lose the first codepoint of your file if that codepoint happens to be a zero width no-break space.

For utf16be (big endian) and utf16le (little endian) the standard states the byte order mark should not exist, and that any zero width no-break space at the start of a file is just that, and should be passed through.

Standards and practice are not identical and in practice many programs will remove a leading zero width non-break space even if set explicitly to utf16be or utf16le. I was unsure which route I should take. I looked at how Perl 5 handled it and after thinking for a long time I decided that we would follow the spec. Even though other software will get rid of any detected byte order mark, I think it’s important that if a user specifies utf16be or utf16le, they will always get back the same codepoints that they write to a file.
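Python's codecs follow the same part of the spec, which makes the difference easy to demonstrate:

```python
# A little endian UTF-16 file: BOM bytes followed by 'hi'.
data = b'\xff\xfe' + 'hi'.encode('utf-16-le')

# 'utf-16' uses the BOM to pick the endianness and consumes it.
assert data.decode('utf-16') == 'hi'

# 'utf-16-le' treats the same leading bytes as a real ZWNBSP codepoint
# and passes it through, exactly as the standard says it should.
assert data.decode('utf-16-le') == '\ufeffhi'
```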

## Current Work

I am currently working on adding support so that when you write a file in 'utf16' it will add a byte order mark, as utf16 is supposed to always have one. Until this is added, you can write a file with Perl 6 on one computer using the 'utf16' encoding which will not decode correctly using the same 'utf16' setting, if the two computers are of different endianness.

Since there was no functionality for writing utf-16 to a file previously, there should be no backwards compatibility issues to worry about there and we can set it to add a byte order mark if the utf16 encoder is used.

## Release 2018.09

In the last release utf16, utf16le and utf16be should work, aside from utf16 not adding a byte order mark on file write. Anyone who uses utf16 and finds any issues, or has comments, is free to email me or make an issue on the MoarVM or Rakudo issue trackers.

## Zoffix Znet: The 100 Day Plan: The Update on Perl 6.d Preparations

### Published on 2018-08-09T00:00:00

Info on how 6.d release prep is going

## Zoffix Znet: Introducing: Perl 6 Marketing Assets Web App

### Published on 2018-08-05T00:00:00

Get your Perl 6 flyers and brochures

## Zoffix Znet: Introducing: Newcomer Guide to Contributing to Core Perl 6

### Published on 2018-08-02T00:00:00

Info on the new guide for newcomers

## 6guts: Redesigning Rakudo’s Scalar

What’s the most common type your Perl 6 code uses? I’ll bet you that in most programs you write, it’ll be Scalar. That might come as a surprise, because you pretty much never write Scalar in your code. But in:

```
my $a = 41;
my $b = $a + 1;
```

Then both $a and $b point to Scalar containers. These in turn hold the Int objects. Contrast it with:

```
my $a := 42;
my $b := $a + 1;
```


Where there are no Scalar containers. Assignment in Perl 6 is an operation on a container. Exactly what it does depends on the type of the container. With an Array, for example, it iterates the data source being assigned, and stores each value into the target Array. Assignment is therefore a copying operation, unlike binding, which is a referencing operation. Making assignment the shorter thing to type makes it more attractive, and having the more attractive thing decrease the risk of action at a distance is generally a good thing.
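The container semantics can be modelled in a few lines of Python (illustrative only: a small class stands in for the Perl 6 Scalar, assignment mutates it, binding merely shares it):

```python
class Scalar:
    """Toy stand-in for a Perl 6 Scalar container."""
    def __init__(self, value=None):
        self.value = value

def assign(container, value):
    # Assignment is an operation on the container: it mutates in place.
    container.value = value

a = Scalar(41)
b = a                 # binding: a second name for the same container
assign(b, 42)
assert a.value == 42  # the change is visible through the shared container
```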

Having Scalar be first-class is used in a number of features:

• Lazy vivification, so that a mere if %a{$x} { ... } will not initialize the hash slot in question, but %a{$x} = 42 will do so (this also works many levels deep)
• The is rw trait on parameters being able to work together with late-bound dispatch
• Making l-value routines possible, including every is rw accessor
• List assignment
• Using meta-ops on assignment, for example Z=

And probably some more that I forgot. It's powerful. It's also torture for those of us building Perl 6 implementations and trying to make them run fast. The frustration isn't so much the immediate cost of allocating all of those Scalar objects – that of course costs something, but modern GC algorithms can throw away short-lived objects pretty quickly – but rather the difficulties it introduces for program analysis.

Despite all the nice SSA-based analysis we do, tracking the contents of Scalar containers is currently beyond that. Rather than any kind of reasoning to prove properties about what a Scalar holds, we instead handle it through statistics, guards, and deoptimization at the point that we fetch a value from a Scalar. This still lets us do quite a lot, but it’s certainly not ideal. Guards are cheap, but not free.

Over the course of my current grant from The Perl Foundation, I’ve been working out a roadmap for doing better with optimization in the presence of Scalar containers. Their presence is one of the major differences between full Perl 6 and the restricted NQP (Not Quite Perl), and plays a notable part in the performance difference between the two.

I’ve taken the first big step towards improving this situation by significantly re-working the way Scalar containers are handled. I’ll talk about that in this post, but first I’d like to provide an idea of the overall direction.

In the early days of MoarVM, when we didn't have specialization or compilation to machine code, it made sense to do various bits of special-casing of Scalar. As part of that, we wrote code handling common container operations in C. We've by now reached a point where the C code that used to be a nice win is preventing us from performing the analyses we need in order to do better optimizations. At the end of the day, a Scalar container is just a normal object with an attribute $!value that holds its value. Making all operations dealing with Scalar containers really be nothing more than some attribute lookups and binds would allow us to solve the problem in terms of more general analyses, which stand to benefit many other cases where programs use short-lived objects.

The significant new piece of analysis we'll want to do is escape analysis, which tells us which objects have a lifetime bounded to the current routine. We understand "current routine" to incorporate those that we have inlined.

If we know that an object's usage lies entirely within the current routine, we can then perform an optimization known as scalar replacement, which funnily enough has nothing much to do with Scalar in the Perl 6 sense, even if it solves the problems we're aiming to solve with Scalar! The idea is that we allocate a local variable inside of the current frame for each attribute of the object. This means that we can then analyze them like we analyze other local variables, subject them to SSA, and so forth. This for one gets rid of the allocation of the object, but also lets us replace attribute lookups and binds with a level of indirection less. It will also let us reason about the contents of the once-attributes, so that we can eliminate guards that we previously inserted because we only had statistics, not proofs.

So, that's the direction of travel, but first, Scalar and various operations around it needed to change.
### Data structure redesign

Prior to my recent work, a Scalar looked something like:

```
class Scalar {
    has $!value;        # The value in the Scalar
    has $!descriptor;   # rw-ness, type constraint, name
    has $!whence;       # Auto-vivification closure
}
```
The $!descriptor held the static information about the Scalar container, so we didn't have to hold it in every Scalar (we usually have many instances of the same "variable" over a program's lifetime). The $!whence was used when we wanted to do some kind of auto-vivification. The closure attached to it was invoked when the Scalar was assigned to, and then cleared afterwards. In an array, for example, the callback would bind the Scalar into the array storage, so that the element – if assigned to – would start to exist in the array. There are various other forms of auto-vivification, but they all work in roughly the same way.
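The $!whence mechanism can be sketched like this (a Python model; the run-once closure and the array-storage callback are the point, and the names are invented for illustration):

```python
class Scalar:
    def __init__(self, whence=None):
        self.value = None
        self.whence = whence        # closure to run on first assignment

def assign(scalar, value):
    if scalar.whence is not None:
        scalar.whence()             # e.g. bind this Scalar into array storage
        scalar.whence = None        # cleared afterwards: run-once semantics
    scalar.value = value

storage = {}                        # stand-in for sparse array storage
s = Scalar(whence=lambda: storage.__setitem__(3, s))

assert 3 not in storage             # reading never vivifies the element
assign(s, 'x')
assert storage[3] is s              # assignment bound the Scalar into storage
```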

This works, but closures aren’t so easy for the optimizer to deal with (in short, a closure has to have an outer frame to point to, and so we can’t inline a frame that takes a closure). Probably some day we’ll find a clever solution to that, but since auto-vivification is an internal mechanism, we may as well make it one that we can see a path to making efficient in the near term future.

So, I set about considering alternatives. I realized that I wanted to replace the $!whence closure with some kind of object. Different types of object would do different kinds of vivification. This would work very well with the new spesh plugin mechanism, where we can build up a set of guards on objects. It also will work very well when we get escape analysis in place, since we can then potentially remove those guards after performing scalar replacement. Thus after inlining, we might be able to remove the "what kind of vivification does this assignment cause" checking too.

So this seemed workable, but then I also realized that it would be possible to make Scalar smaller by:

• Placing the new auto-vivification objects in the $!descriptor slot instead
• Having the vivification objects point to the original descriptor carrying the name, type, etc.
• Upon first assignment, running the vivification logic and then replacing the Scalar's $!descriptor with the simple one carrying the name and type, thus achieving the run-once semantics

This not only makes Scalar smaller, but it means that we can use a single guard check to indicate the course of action we should take with the container: a normal assignment, or a vivification.

The net result: vivification closures go away, giving more possibility to inline, assignment gets easier to specialize, and we get a memory saving on every Scalar container. Nice!

### C you later

For this to be really worth it from an optimization perspective, I needed to eliminate various bits of C special-case code around Scalar and replace it with standard MoarVM ops. This implicated:

• Assignment
• Atomic compare and swap
• Atomic store
• Handling of return values, including decontainerization
• Creation of new Scalar containers from a given descriptor

The first three became calls to code registered to perform the operations, using the 6model container API. The second two cases were handled by replacing the calls to C extops with desugars, which is a mechanism that takes something that is used as an nqp::op and rewrites it, as it is compiled, into a more interesting AST, which is then in turn compiled. Happily, this meant I could make all of the changes I needed to without having to go and do a refactor across the CORE.setting. That was nice.

So, now those operations were compiled into bytecode operations instead of ops that were really just calls to C code. Everything was far more explicit. Good! Alas, the downside is that the code we generate gets larger in size.

### Optimization with spesh plugins

I talked about specializer plugins in a recent post, where I used them to greatly speed up various forms of method dispatch. However, they are also applicable to optimizing operations on Scalar containers.
The change to decontainerizing return values was especially bad at making the code larger, since it had to do quite a few checks. However, with a spesh plugin, we could just emit a use of the plugin, followed by calling whatever the plugin produces. Here's a slightly simplified version of the plugin I wrote, annotated with some comments about what it is doing. The key thing to remember about a spesh plugin is that it is not doing an operation, but rather it's setting up a set of conditions under which a particular implementation of the operation applies, and then returning that implementation.

```
nqp::speshreg('perl6', 'decontrv', sub ($rv) {
    # Guard against the type being returned; if it's a Scalar then that
    # is what we guard against here (nqp::what would normally look at
    # the type inside such a container; nqp::what_nd does not do that).
    nqp::speshguardtype($rv, nqp::what_nd($rv));

    # Check if it's an instance of a container.
    if nqp::isconcrete_nd($rv) && nqp::iscont($rv) {
        # Guard that it's concrete, so this plugin result only applies
        # for container instances, not the Scalar type object.
        nqp::speshguardconcrete($rv);

        # If it's a Scalar container then we can optimize further.
        if nqp::eqaddr(nqp::what_nd($rv), Scalar) {
            # Grab the descriptor.
            my $desc := nqp::speshguardgetattr($rv, Scalar, '$!descriptor');
            if nqp::isconcrete($desc) {
                # Has a descriptor, so rw. Guard on type of value. If it's
                # Iterable, re-containerize. If not, just decont.
                nqp::speshguardconcrete($desc);
                my $value := nqp::speshguardgetattr($rv, Scalar, '$!value');
                nqp::speshguardtype($value, nqp::what_nd($value));
                return nqp::istype($value, $Iterable) ?? &recont !! &decont;
            }
            else {
                nqp::speshguardtypeobj($desc);
                return &identity;
            }
        }

        # Otherwise, full slow-path decont.
        return &decontrv;
    }
    else {
        # No decontainerization to do, so just produce identity.
        return &identity;
    }
});
```

Where &identity is the identity function, &decont removes the value from its container, &recont wraps the value in a new container (so an Iterable in a Scalar stays as a single item), and &decontrv is the slow-path for cases that we do not know how to optimize.

The same principle is also used for assignment, however there are more cases to analyze there. They include:

• When the type constraint is Mu, and there is a normal (non-vivify) descriptor, then we do a specialization based on the value being the Nil object (in which case we produce the operation that sets $!value back to the default value from the descriptor) or non-Nil (just assign a value, with no need to type check)
• When the type constraint is something else, and there is a normal (non-vivify) descriptor, then we do a specialization based on the type of the descriptor being assigned. Since the optimizer will often know this already, then we can optimize out the type check
• When it is an array auto-viv, we produce the exact sequence of binds needed to effect the operation, again taking into account a Mu type constraint and a type constraint that needs to be checked

Vivifying hash assignments are not yet optimized by the spesh plugin, but will be in the near future.

The code selected by the plugin is then executed to perform the operation. In most cases, there will only be a single specialization selected. In that case, the optimizer will inline that specialization result, meaning that the code after optimization is just doing the required set of steps needed to do the work.
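Abstracting away the NQP specifics, the plugin mechanism behaves roughly like this Python model (names are invented; the real mechanism records guards inside the specializer, not in a Python list):

```python
class PluginSite:
    """One use site of a spesh plugin: caches (guards, implementation) pairs."""
    def __init__(self, plugin):
        self.plugin = plugin
        self.cases = []

    def resolve(self, arg):
        for guards, impl in self.cases:
            if all(g(arg) for g in guards):
                return impl                  # guards hold: reuse specialization
        guards, impl = self.plugin(arg)      # run the plugin body once
        self.cases.append((guards, impl))
        return impl

def decont_plugin(value):
    # Guard on the exact type seen; pick an implementation for that case.
    guards = [lambda v, t=type(value): type(v) is t]
    if isinstance(value, list):              # stand-in for "is a container"
        return guards, lambda v: v[0]        # fetch the contained value
    return guards, lambda v: v               # identity for non-containers

site = PluginSite(decont_plugin)
assert site.resolve([5])([5]) == 5           # container case
assert site.resolve(7)(7) == 7               # identity case
assert len(site.cases) == 2                  # one specialization per shape
site.resolve([9])                            # hits the cached list case
assert len(site.cases) == 2
```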

### Next steps

Most immediately, a change to such a foundational part of the Rakudo Perl 6 implementation has had some fallout. I'm most of the way through dealing with the feedback from toaster (which runs all the ecosystem module tests), being left with a single issue directly related to this work to get to the bottom of. Beyond that, I need to spend some time re-tuning array and hash access to better work with these changes.

Then will come the step that this change was largely in aid of: implementing escape analysis and scalar replacement, which for much Perl 6 code will hopefully give a quite notable performance improvement.

This brings me to the end of my current 200 hours on my Perl 6 Performance and Reliability Grant. Soon I will submit a report to The Perl Foundation, along with an application to continue this work. So, all being well, there will be more to share soon. In the meantime, I’m off to enjoy a week’s much needed vacation.

## samcv: Secure Hashing for MoarVM to Prevent DOS Attacks

### Published on 2018-05-16T07:00:00

Hashes are very useful data structures and underlie many internal representations in Perl 6, as well as being used directly as Hash objects. These data structures are very nice since they offer O(1) insertion time and O(1) lookup time on average. Hashes have long been considered an essential feature for Perl, much loved by users. When exploited, however, hashes can cause servers to grind to a halt. New in Rakudo Perl 6 2018.05 will be a feature called hash randomization which does much to help protect against this attack. In this article I explain some hashing basics as well as how the attack against non-randomized hashing can work.

## Hashing Basics

Some hashing basics: when we use a hash, we take a string and derive an integer to represent it. Much as md5 or sha1 sums take an arbitrary amount of data and condense it into a shorter number which can identify it, we do the same for strings.

my %hash; %hash<foo> = 10

In this code, MoarVM takes the string foo and performs a hashing function on it using a series of bitwise operations. The goal is to create a shorter number which allows us to put the foo key into one of the 8 buckets that MoarVM initializes when a hash is created.

Our hashing code sets up a predefined number of buckets. When a bucket fills up to 10 items, it doubles the number of buckets. In normal operation the hashes will be randomly distributed, so it would take ≈47 keys added (≈47 is the average number of items needed before one bucket fills up to 10 items) before we have to expand the buckets the first time.

When the buckets are expanded, we will now have 16 buckets. In normal operation our previous ≈47 items should be evenly distributed into those 16 buckets.
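The bucket mechanics above can be sketched with a toy chained hash table. This is an illustrative Python model, not MoarVM's actual code; the class name and thresholds simply mirror the numbers in the text (8 initial buckets, doubling when a chain reaches 10):

```python
# Toy chained hash table: starts with 8 buckets and doubles them
# whenever one chain fills up to 10 items, as described above.
class ToyHash:
    def __init__(self):
        self.nbuckets = 8
        self.buckets = [[] for _ in range(self.nbuckets)]

    def _index(self, key):
        # Power-of-two bucket counts let us mask instead of taking a modulo.
        return hash(key) & (self.nbuckets - 1)

    def put(self, key, value):
        chain = self.buckets[self._index(key)]
        for pair in chain:
            if pair[0] == key:
                pair[1] = value
                return
        chain.append([key, value])
        if len(chain) >= 10:
            self._grow()

    def _grow(self):
        # Double the bucket count and redistribute every entry.
        entries = [pair for chain in self.buckets for pair in chain]
        self.nbuckets *= 2
        self.buckets = [[] for _ in range(self.nbuckets)]
        for key, value in entries:
            self.buckets[self._index(key)].append([key, value])

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

h = ToyHash()
for i in range(100):
    h.put("key%d" % i, i)
# By the pigeonhole principle, some chain reaches 10 items well before
# 100 insertions, so the table has doubled its buckets at least once.
```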

## The Attack

Without a random hash seed it is easy for an attacker to generate strings which will result in the same hash. This devolves to O(n²) time for the hash lookup. This O(n²) is actually O(string_length * num_collisions). When we have hash collisions, no matter how many times we double the number of buckets, the strings which have hash collisions will always remain in the same bucket as each other. To locate the correct string, MoarVM must go down the chain and compare each hash value with the one we’re looking for. Since they are all the same, we must fall back to also checking each string itself manually until we find the correct string in that bucket.
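The degradation is easy to measure with a toy chained table. In this Python sketch (illustrative only, not MoarVM code), a constant hash function forces every key into one bucket, so the comparison count for looking every key up grows quadratically instead of staying near one comparison per key:

```python
# Count how many key comparisons it takes to look up every key once,
# after inserting them into a chained table of nbuckets buckets.
def lookup_cost(keys, hash_fn, nbuckets=1024):
    buckets = [[] for _ in range(nbuckets)]
    for k in keys:
        buckets[hash_fn(k) % nbuckets].append(k)
    comparisons = 0
    for k in keys:
        for other in buckets[hash_fn(k) % nbuckets]:
            comparisons += 1
            if other == k:
                break
    return comparisons

keys = ["key%d" % i for i in range(1000)]
good = lookup_cost(keys, hash)            # well-spread hash: short chains
bad  = lookup_cost(keys, lambda k: 42)    # every key collides: one long chain
# bad is 1 + 2 + ... + 1000 = 500500 comparisons, since the i-th key sits
# at depth i in the single shared chain; good stays close to 1000.
```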

This attack is done by creating a function that is essentially our hashing function run backward (for those curious, see here for an example of code which does forward and backward hashing for the Chrome V8 engine’s former hashing function). We hash our target string, t. We then take random 3-character sequences (in our case graphemes) and plug them into our backward hashing function along with the hash of our target t. The backward hash and the random character sequence are stored in a dictionary, and the process is repeated until we have a very large number of backward hashes and random 3-grapheme prefixes.

We can then use this dictionary to construct successively longer strings (or short if we so desire) which are the same hash as our target string t. This is a simplification of how the Meet-In-The-Middle attack works.
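The idea can be demonstrated against a deliberately weak hash. The Python sketch below uses a toy 24-bit djb2-style rolling hash (not MoarVM's real function; the small width just makes collisions cheap to find): because each step of the hash is invertible, we can run it backward from the target hash through candidate suffixes, then meet in the middle with forward-hashed prefixes:

```python
from itertools import product
import string

# Toy invertible rolling hash, reduced to 24 bits so the demo is fast;
# real hash functions use 32 or 64 bits.
MOD = 1 << 24
MUL = 33
INV = pow(MUL, -1, MOD)  # modular inverse exists because 33 is odd

def fwd(h, s):
    for c in s:
        h = (h * MUL + ord(c)) % MOD
    return h

def bwd(h, s):
    # Undo fwd one character at a time, last character first.
    for c in reversed(s):
        h = ((h - ord(c)) * INV) % MOD
    return h

target = fwd(5381, "secret")

# Run the hash backward from the target through every 3-letter suffix,
# recording which intermediate state each suffix requires.
tails = {}
for t in product(string.ascii_lowercase, repeat=3):
    suf = "".join(t)
    tails[bwd(target, suf)] = suf

# Meet in the middle: any prefix whose forward state matches a recorded
# backward state yields a full 6-letter string colliding with the target.
collisions = []
for p in product(string.ascii_lowercase, repeat=3):
    pre = "".join(p)
    state = fwd(5381, pre)
    if state in tails:
        collisions.append(pre + tails[state])
```

With 17,576 states on each side of a 24-bit space, a couple of dozen collisions typically fall out; every string in `collisions` hashes to the same value as "secret".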

This has been fixed in most programming languages (Python, Ruby, Perl), and several CVEs have been issued over the years for this exploit (see CVEs for PHP, OCaml, Perl, Ruby and Python).

Assuming everything is fine for the next release, I will also merge changes which introduce a stronger hashing function called SipHash. SipHash is meant to protect against an attacker discovering a hash secret remotely. While randomizing the seed makes this attack much harder, a determined attacker could discover the secret, and once that is done they can easily perform a meet-in-the-middle attack. SipHash was designed to solve the vulnerability of the hash function itself to meet-in-the-middle attacks. The randomization of the hash secret and a non-vulnerable hashing function work together to avert hash collision denial of service attacks.

While the hash secret randomization will be out in Rakudo 2018.05, SipHash is planned to be introduced in Rakudo 2018.06.

## Randomness Source

On Linux and Unix we prefer function calls rather than reading from /dev/urandom. There are some very important reasons for this.

Relying on an external file existing is potentially problematic. If we are in a chroot and /dev is not mounted, we will not have access to /dev/urandom. /dev/urandom is not special: it can be deleted by accident (or on purpose), or a sparse data file can be mounted in its place, undetectable by programs. Trusting it simply because of its path is not ideal. Also, if we exhaust the maximum number of open file descriptors, we will be unable to open /dev/urandom at all.

### System Functions

On Windows we use pCryptGenRandom, which is provided by advapi32.dll since Windows XP.

Linux, FreeBSD, OpenBSD and MacOS all use system-provided random calls (if available) to get the data rather than having to open /dev/urandom. All these OSes guarantee these calls to be non-blocking, though MacOS’s documentation does not comment on it. This is mostly important in very early userspace, which bit Python when a developer accidentally changed the randomness source, causing systems that relied on very early Python scripts to stop booting while waiting for the randomness source to initialize.

If the function doesn’t exist we fall back to using /dev/urandom. If opening or reading it fails, on BSD’s we will use the arc4random() function. In many BSD’s this is seeded from the system’s random entropy pool, providing us with a back up in case /dev/urandom doesn’t exist.

On Linux we use the getrandom() system call, which was added in kernel 3.17, directly instead of through the glibc wrapper, since the glibc wrapper was added much later than the system call itself.

On MacOS, Solaris and FreeBSD we use getrandom() while on OpenBSD we use getentropy().
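The preference order described above — a dedicated system call first, /dev/urandom only as a fallback — can be illustrated with a small Python sketch. The function name `random_seed` is invented for the example; MoarVM's actual implementation is in C:

```python
import os

# Sketch of the fallback chain: try a system call for randomness first,
# and only read /dev/urandom if no such call is available.
def random_seed(nbytes=8):
    try:
        # getrandom() system call (Linux 3.17+; exposed by Python 3.6+).
        return os.getrandom(nbytes)
    except (AttributeError, OSError):
        pass
    # os.urandom reads /dev/urandom (or an OS equivalent) as the fallback.
    return os.urandom(nbytes)

seed = random_seed()
```

The same shape applies to MoarVM's chain: system call if present, then /dev/urandom, then (on BSDs) arc4random().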

## User Facing Changes

From Rakudo Perl 6 2018.05, the order in which keys are returned will differ between perl6 instances.

perl6 -e 'my %hash = <a 1 b 1 c 1 d 1 e 1 f 1>; say %hash.keys'
(d f c a b e)
perl6 -e 'my %hash = <a 1 b 1 c 1 d 1 e 1 f 1>; say %hash.keys'
(e f a d c b)

This will also affect iterating a hash without sorting: for %hash { }

### What Do I Have To Do?

Users and module developers should make sure that they explicitly sort hashes and not rely on a specific order being constant. If you have a module, take a look at the code and see where you are iterating on a hash’s keys and whether or not the order of processing the hash’s keys affects the output of the program.

# This should be okay since we are putting the hash into another hash, order
# does not matter.
for %hash.keys -> $key {
    %stuff{$key} = $i++;
}

# This can potentially cause issues, depending on where @stuff is used.
for %hash.keys -> $key {
    @stuff.push: $key;
}

# This should be OK since we are using is-deeply and comparing a hash with another
# hash
is-deeply my-cool-hash-returning-function($input),
    %( foo => 'text', bar => 'text', baz => 'text');
# Probably best to avoid using is. The is test function converts the input to a string before
# checking for equality, but works since we stringify it in sorted order.
is %hash,  %( foo => 'text', bar => 'text', baz => 'text');
# NO. Keys are not guaranteed to be in the same order on each invocation
is %hash.keys, <a b c d>;

### Module Developers

Module developers should check out the git master of Rakudo, or if 2018.05 has been released, use that to run the tests of your module. Make sure to run the tests multiple times, ideally at least 10 times or use a loop:

while prove -e 'perl6 -Ilib'; do true; done

This loop will run again and again until it encounters a test failure, in which case it will stop.

You must run your tests many times because the hash order will be different on each run. For hashes with a small number of items, it may not fail on every run. Make sure that you also look at the source to identify items that need fixing; don’t just rely on the tests to tell you if you must make changes to your module.

See also Hardening Perl’s Hash Function, an article by Booking.com about the changes Perl 5 has made to harden hashing.

## Strangely Consistent: Has it been three years?

007, the toy language, is turning three today. Whoa.

On its one-year anniversary, I wrote a blog post to chronicle it. It seriously doesn't feel like two years since I wrote that post.

On and off, in between long stretches of just being a parent, I come back to 007 and work intensely on it. I can't remember ever keeping a side project alive for three years before. (Later note: Referring to the language here, not my son.) So there is that.

So in a weird way, even though the language is not as far along as I would expect it to be after three years, I'm also positively surprised that it still exists and is active after three years!

In the previous blog post, I proudly announced that "We're gearing up to an (internal) v1.0.0 release". Well, we're still gearing up for v1.0.0, and we are closer to it. The details are in the roadmap, which has become much more detailed since then.

Noteworthy things that happened in these past two years:

Things that I'm looking forward to right now:

• Landing macro infix:<ff> in master, which is quite literally one small fix away at this point.
• Landing a huge object system refactor that really cleans up the language.
• is parsed, also only a few steps away.
• Making a better web site, focused around language tutorials and API documentation.
• Writing a 007 parser in pure 007.

I tried to write those in increasing order of difficulty.

All in all, I'm quite eager to one day burst into #perl6 or #perl6-dev and actually showcase examples where macros quite clearly do useful, non-trivial things. 007 is and has always been about producing such examples, and making them run in a real (if toy) environment.

And while we're not quite over that hump yet, we're perceptibly closer than we were two years ago.

Belated addendum: Thanks and hugs to sergot++, to vendethiel++, to raiph++ and eritain++, for sharing the journey with me so far.