pl6anet

Perl 6 RSS Feeds

Steve Mynott (Freenode: stmuk) steve.mynott (at)gmail.com / 2019-04-22T10:11:16


gfldex: Wrapping a scope

Published by gfldex on 2019-04-21T14:00:34

Intricate instruments like *scopes can be quite temperature-sensitive. Wrapping them into some cosy insulator can help. With Perl 6 it is the other way around. When we wrap a Callable we need to add insulation to guard anything that is in a different scope.

On IRC AndroidKitKat asked a question about formatting Array-stringification. It was suggested to monkey-type another gist-method into Array, with the warning that precompilation would be disabled in that case. A wrapper would avoid this problem. For both solutions the problem of interfering with other coders’ code (in doubt that is you, half a year younger) remains. Luckily we can use dynamic variables to take advantage of stack magic to solve this problem.

Array.^can('gist')[0].wrap(
    sub (\a) {
        print 'wrapped: ';
        # use the custom format only when $*dyn is set somewhere up the call chain
        $*dyn ?? a.join(',') !! nextsame
    }
);

my @a = [1,2,3];

{
    my $*dyn = True;
    say @a;
}

say @a;

# output:
wrapped: 1,2,3
wrapped: [1 2 3]

Dynamic variables don’t really have a scope. They live on the stack and their assigned value travels up the call tree. A wrapper can check if that variable is defined or got a specific value, and fall back to the default behaviour by calling nextsame if need be. Both .wrap and dynamic variables work across module boundaries. As such we can make the behaviour of our code much more predictable.
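Worth adding (a small sketch of my own, not from the original post): .wrap returns a handle, so a wrapper like the one above can also be removed again once it has served its purpose:

# Assuming the wrap call above is changed to keep its return value:
my $handle = Array.^can('gist')[0].wrap(
    sub (\a) { print 'wrapped: '; $*dyn ?? a.join(',') !! nextsame }
);

Array.^can('gist')[0].unwrap($handle); # restore the original gist
say [1,2,3];                           # [1 2 3] with no 'wrapped: ' prefix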

This paragraph was meant to wrap things up. But since blogs don’t support dynamic variables I better stop before I mess something up.

Jo Christian Oterhals: Perl 6 small stuff #17: a weekly challenge of Big Pi’s, Bags and modules

Published by Jo Christian Oterhals on 2019-04-15T18:42:20

I didn’t have the time to solve last week’s challenge, but Easter has started with slower days — so I decided to try my hand at the Perl Weekly Challenge number 4.

In my opinion, exercises 1 and 2 were not beginner vs advanced this time; both were peculiar. The first exercise was this:

Write a script to output the same number of PI digits as the size of your script. Say, if your script size is 10, it should print 3.141592653.

Thinking about this I thought that Perl 5 seemed to be easiest this time, as the Math::BigFloat package has the method bpi that returns PI to the given precision (i.e. bpi(10) returns 3.141592654). All I’d have to do was to figure out the file size of the script itself and return PI to that precision. I.e. a Perl 5 answer would look something like this:

#!/usr/bin/env perl
use v5.18;
use Math::BigFloat 'bpi';
say bpi(-s $ARGV[0]); 

I’m uncertain as to whether the size of the script means the number of characters in the script file, or whether it is the size of the script in bytes (in a unicode world those two aren’t necessarily identical). I chose to believe it’s the script size in bytes we’re talking about.

Since Perl 6 does not have — as far as I know — a built-in equivalent to bpi, a Perl 6 answer must implement such functionality itself. But if I put that code into the script itself, the script would probably be so long that it exceeded the number of digits of PI a bpi implementation like Perl 5’s could return.

So this gave me an excellent opportunity to introduce modules as a part of the solution.

Script: PWC004-01.p6
Usage: perl6 -I. PWC004-01.p6
Output: 3.141592653589793238462643383279502884197169399375105820974945
#!/usr/bin/env perl6
use BigPI;
say BigPI::pi $?FILE.IO.s;

A couple of fun things about this: $?FILE refers to the script file itself. If you prefer a more self-explanatory variant, $*PROGRAM-NAME can be used instead of $?FILE.

.IO.s returns the size of the file in bytes and, as mentioned above, seems to be what the exercise calls for. However, see note [1] below if you’d rather have the size be the number of characters instead. And if you’ve mixed unicode into your Perl 5 script, see [2] for some Perl 5 specific notes.

Anyway, the module referenced here is stored in a separate file, and looks like this.

Module: BigPI.pm6
# Place in script directory and use perl6 with -I flag
# i.e. perl6 -I. <calling script>
unit module BigPI;
# This definition of PI is borrowed from Perl 5's Math::BigFloat
constant PI = join '', qw:to/END/;
314159265358979323846264338327950288419716939937510582097494459230781640628
620899862803482534211706798214808651328230664709384460955058223172535940812
848111745028410270193852110555964462294895493038196442881097566593344612847
564823378678316527120190914564856692346034861045432664821339360726024914127
372458700660631558817488152092096282925409171536436789259036001133053054882
046652138414695194151160943305727036575959195309218611738193261179310511854
807446237996274956735188575272489122793818301194912983367336244065664308602
139494639522473719070217986094370277053921717629317675238467481846766940513
200056812714526356082778577134275778960917363717872146844090122495343014654
958537105079227968925892354201995611212902196086403441815981362977477130996
051870721134999999837297804995105973173281609631859502445945534690830264252
230825334468503526193118817101000313783875288658753320838142061717766914730
359825349042875546873115956286388235378759375195778185778053217122680661300
192787661119590921642019893809525720106548586327886593615338182796823030195
END
our sub pi(Int $precision where * <= PI.chars = 10) {
    return "3." ~ PI.substr(1, $precision - 2) ~ round(PI.substr($precision - 1, 2) / 10);
}

The first line, unit module BigPI;, tells the interpreter that everything that follows is a part of this single module. If I had wanted to put several modules into one file, I’d define them with module NAME { …content… }.
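As a quick sketch of that block form (module names made up), two modules could share one file like this:

module Alpha { our sub hi() { say 'hi from Alpha' } }
module Beta  { our sub hi() { say 'hi from Beta'  } }

Alpha::hi(); # hi from Alpha
Beta::hi();  # hi from Beta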

To avoid collision with the built-in pi, I used the our sub statement. This means that you have to refer to the sub using its full name, i.e. BigPI::pi. You could have chosen to export pi instead (is export), which would have let you refer to pi directly. But I prefer the other version to avoid namespace collisions.

This tiny module also showcases a couple of other Perl 6 specific things. One, I use Perl 6’s built-in mechanics for defining constraints within the subroutine signature. In this case I tell the Perl 6 compiler not to allow a precision higher than the number of digits in the PI constant (where * <= PI.chars). If you call the sub with a higher precision, an error will be thrown. In addition I define a default precision of 10 should you want to call BigPI::pi without an argument (the = 10 at the end).
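Here are the same two signature tricks in isolation, in a made-up example of mine (not part of the module):

sub volume(Int $level where * <= 11 = 5) { "volume set to $level" }

say volume();   # volume set to 5  (the default kicks in)
say volume(11); # volume set to 11 (passes the where constraint)
say volume(12); # throws: the where constraint fails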

The second exercise was this:

You are given a file containing a list of words (case insensitive 1 word per line) and a list of letters. Print each word from the file that can be made using only letters from the list. You can use each letter only once (though there can be duplicates and you can use each of them once), you don’t have to use all the letters.

I have to admit I had to read this several times before understanding it, and I have a feeling that I may not have fully understood it even now. But I read it like this: Every word in the file is to be checked against a list of letters. That list can contain duplicates. If the word “abba” is checked against a list <a a a a b b b c d d e>, two a’s and two b’s will be removed from the list, so that what’s left is <a a b c d d e> before checking the next word. If the next word is “be”, the list is reduced to <a a c d d>. Had the word been “bee”, however, there would be no match and the list would still be <a a b c d d e> when checking the next word.

#!/usr/bin/env perl6
my $letters = ( gather for 1..500 { take ('a'..'z').pick } ).BagHash;
say ([+] $letters.values) ~ " letters matches...";
for "random-2000.dict".IO.lines.sort({ rand }) -> $word {
    my $wbh = $word.lc.comb.BagHash;
    if ($wbh (<=) $letters) {
        $letters = $letters (-) $wbh;
        say "\t" ~ $word;
    }
}
say ([+] $letters.values) ~ " letters remain.";

This script assumes there is a file called “random-2000.dict” in the working directory, containing a list of words (I have a list of 2000 random words in that file). It also assumes, perhaps naively, that the words are made of the letters a to z, i.e. standard English.

I start the script by generating a list of 500 characters, a random selection of a to z’s. You’ve seen me use gather and take before, so I won’t explain that here again. Rather I’d like to point out what happens immediately after: I convert the list to a BagHash. A BagHash is a short and simple version of stuff you’ve programmed manually in Perl 5 before. The letters a to z are keys of this hash, and the value for each key is the number of occurrences of that letter (if the value is zero, the key is removed).

I loop through a randomly sorted list of words, and convert each word into a BagHash of its own. And now I can start to use some really useful infixes that work on BagHashes. In the statement if ($wbh (<=) $letters) the infix (<=) compares the word’s BagHash with the big bag of letters. If the word is contained by or equal to the bag of letters, we have a matching word and can print it to screen. At the same time I remove the letters of the word from the bag of letters by using the infix (-). So a word <a b b a> removed from the list of letters <a a a a b b b c d d e> would reduce that list to <a a b c d d e> before testing the next word. The infixes save us from writing a lot of code here!
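To see the two infixes in isolation, here is a small sketch of mine with made-up values:

my $letters = <a a a a b b b c d d e>.BagHash;
my $word    = 'abba'.comb.BagHash;

say $word (<=) $letters; # True: two a's and two b's fit in the bag
$letters = $letters (-) $word;
say $letters;            # a(2) b c d(2) e (order may vary)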

I’m glad the exercise stated that I didn’t have to use all of the letters, because I never seem to be able to. Every time I run this script a couple of hundred letters remain.

You should also note the use of [+] in this code — the reduction metaoperator applied to +. Used on a list or an array, it sums all of the elements of that list. It’s not strictly necessary to use it here; I use it only to output some information about the list of letters before and after the run.
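A quick illustration of the reduction (values made up):

say [+] 1, 2, 3, 4;   # 10 (same as 1 + 2 + 3 + 4)
say [max] 1, 2, 3, 4; # 4  (the brackets work with other infix operators too)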

Here is the output of the script when I run it:

500 letters matches...
ingrainedly
regarding
trichinopoly
solenoidal
prophloem
beggary
paravertebral
tapinophobia
awarder
isocratic
mucic
tox
handwheel
suasible
skiptail
hippoboscid
colopexy
undeviating
onetime
stiped
housekeep
cocci
adjudgment
genys
Boche
fletch
nunky
l
huck
whuff
smug
hut
Lulu
259 letters remain.

These were great challenges. As simple as they may seem at first glance, they got me to use several Perl 6 specific techniques. It was a great way to “hone my skills” so to speak.

NOTES

[1] If size is meant as “the size of the file in bytes”, “filename”.IO.s returns exactly what you want. But as mentioned above, files with Unicode characters in them can potentially have several times more bytes than characters.

A file containing a single character — a — is one byte long. So here the number of bytes and characters are equal. But another file containing this single character — 🕹 — is four bytes long.

So which is it? What represents “size” most correctly? I’m not sure. I guess bytes is the safe bet. But should you want to count characters and use that number for PI precision, do this instead:

say BigPI::pi chars slurp $?FILE;

There’s more than one way to do it. And interpret it I guess.

[2] If you’re programming Perl 5, you’ve got similar issues. -s "filename" returns the size of a file in bytes while length($variable) returns the number of characters. Or so you would think. But consider this script:

#!/usr/bin/perl

use v5.18;

$a = "🕹";
$b = "j";

say length($a);
say length($b);

One would think that both say statements print 1. But in this case the first say prints 4 and the second prints 1. It’s as if length returns bytes and not characters. Why?

Perl 5 has had some level of unicode support since v5.6 from March 2000 (though, I’m sorry to say, it feels like an afterthought even in the latest development versions 19 years later). You can turn much of the unicode support on by, for instance, calling the perl interpreter with the -C flag. But you’ll have to tell the interpreter even more explicitly if your script file itself includes utf8 text (Perl 5 assumes no utf8 by default).

The above script does include utf8, so add a use utf8; at the start of the script to force perl to interpret utf8 characters as characters, not bytes. Now both say statements will print 1 as expected. Rather unexpectedly, perhaps, you even get the possibility to use utf8 letter characters in subroutine and scalar names once you’ve added the use statement at the beginning.

#!/usr/bin/perl

use v5.18;
use utf8;

my $a = "🕹";
my $b = "j";
my $珠 = "perl utf8"; # Throws an error without 'use utf8'

say length($a);
say length($b);
say $珠;

Not quite Perl 6 level yet, but it’s getting there.

Weekly changes in and around Perl 6: 2019.15 Schrödinger

Published by liztormato on 2019-04-15T13:25:01

New Perl 6 blogger Tyler (aearnus) describes his discovery of Junctions in Perl 6 in a blog post called “GADTs and Superpositions in Perl 6”: a really unique and flexible way of looking at problems – one that’s highly inspired by the functional, data-oriented paradigm (Hacker News comments).

Weekly Challenge

Again, quite a few blog posts because of the third Perl Weekly Challenge. These are the blog posts with Perl 6 solutions:

And of note, the polyglot solution by Nick Logan, which runs in both Perl 5 and Perl 6.

Perl Toolchain Summit

Neil Bowers further reports on the plans for the coming Perl Toolchain Summit, in part brought to you by cPanel.

Perl DevRoom at Spanish FOSDEM

There will be a Perl DevRoom on 21 June at esLibre 2019, which one could consider the “Spanish FOSDEM”. Please add your proposal for a presentation as a Pull Request to the proposal repository. So far, it looks like there is a Perl 6 Tutorial on the menu already!

Nightly Docker images

Patrick Spek describes how his scripts are creating a Perl 6 Docker image every night on the various Linux flavours, and how that compares to the work that Tony O’Dell has done.

Lucky Arch Linux users

It appears that the Comma IDE Community Edition is now available to Arch Linux users if they activated the Arch User Repository in their package manager. And it appears to work like a charm.

Grant Voting Results

The Perl Foundation Grant Committee decided against the only proposal of this round: A Complete (Interactive) Perl 6 Course with Exercises by Andrew Shitov. This is sad news, but maybe not the last we’ve heard of this (Facebook comments).

Picat spacing out

Jeff Goff describes his work on creating a grammar for the Picat language. Which exposed a whitespace gotcha in grammars. (Facebook, Reddit comments).

A Language Creators’ Conversation

The PuPPy event at which Guido van Rossum, James Gosling, Larry Wall & Anders Hejlsberg sat together, has been neatly summarised in a blog post by David Cassel. Which is extra nice since the audio of the video is very bad.

Graphing DB schema

Edouard Klein got into programming in Perl 6 with a very nice command line script that turns CREATE TABLE statements into a graph (Reddit comments).

OWASP Perl 6 Wiki?

Charlie Gonzalez noticed that Perl 5 and Perl 6 frameworks are not very well represented on the Open Web Application Security Project. Volunteers are invited to take the necessary actions to remedy the situation.

Toggle Grayscale

Ricky Morse got inspired by a Python article about toggling the MacOS screen between black-and-white and colour. His Perl 6 solution with NativeCall is remarkably simple.

Core developments

Questions about Perl 6

Meanwhile on Facebook

Meanwhile on perl6-users

Meanwhile on Twitter

Perl 6 in comments

Perl 6 Modules

New modules:

Updated modules:

Winding Down

A week with some nice new speed improvements. And again a nice crop of blog posts. Feels like spring! See you next week for more uplifting Perl 6 news!

Weekly changes in and around Perl 6: 2019.14 Challenge Taking Off

Published by liztormato on 2019-04-08T13:13:38

The Perl Weekly Challenge has generated quite a few submissions and associated blog posts (and of course the repository with submitted solutions). There’s now also a recap of the first challenge by Mohammad S Anwar. Keeping track of all the blog posts has become quite a job. Hopefully yours truly didn’t miss any in this overview:

Check out the guide for submissions if you’re thinking about adding your own Perl 6 solutions.

A Language Creators’ Conversation

Guido van Rossum, James Gosling, Larry Wall & Anders Hejlsberg sat together at the latest PuPPy meeting in Seattle and discussed language creation. Sadly, the audio of the video is very bad. Regardless of that, it spurred quite a discussion on Hacker News. Good to see Larry up and about!

Looking for Grant Committee Members

The TPF Grant Committee is looking for new members. Being a member involves needing to read any official grant proposals (which you might be reading already anyway), and submitting a vote once every 2 months or so, possibly after some mailing list discussion. (Facebook comments).

YACM

Or, welcome Patrick Böker, our latest Yet Another Core Member on the Rakudo Perl 6 project. Patrick was recently involved in making the build of Rakudo Perl 6 completely relocatable, which is very much welcomed by various packagers of Rakudo Perl 6. Looking forward to seeing much more of this good work!

Aearnus looking at Perl 6

A student of mathematics and theatre at the University of Arizona has written two blog posts about Perl 6 in the past week, both of which created quite a stir.

The first one titled “A Whirlwind Tour of Perl 6’s Best Features” starts with:

It’s rare that I find a language that I truly feel innovates upon established conventions and features.

(Reddit comments). This blog post also started a large discussion on /r/programming titled Maybe it’s finally time to give Perl 6 a shot.

The second blog post was titled “Perl 6 is the World’s Worst ML (with addendum by Damian Conway)” (ML on Wikipedia), with quite a few comments on /r/perl6 and Hacker News.

Nice to see two such positive blog posts coming from an unexpected source!

Javascript backend update

Paweł Murias reports on the progress of the work on the Javascript backend of Rakudo Perl 6. Precompilation is still an issue. And a bug found in Chrome. And future plans! Kudos again to Paweł Murias for all this hard work. So good to see it coming to fruition!

Practical Perl 6 Regexes

Brian Duggan has published the slides of his presentation about Practical Perl 6 Regexes given at the DC Baltimore Perl Workshop last weekend.

Perl 6 not so full of art

A blog post showing that 93% of Paint Splatters are Valid Perl Programs completely disregarded the fact that with use strict (which has been recommended practice for at least the past 25 years) this number would be closer to zero. Ah well, some people get stuck in the “then they laugh at you” phase (Hacker News comments with some Perl 6 references).

Swiss Perl Workshop CFP

About a week after the European Perl Conference, there will be the Swiss Perl Workshop 2019 in Olten on 16 and 17 August. The Call for Papers has been opened. Please submit your Perl 6 presentations for what looks like it’s going to be another nice and cosy Swiss Perl Workshop!

Core developments

Questions about Perl 6

Meanwhile on Facebook

Meanwhile on perl6-users

Meanwhile on Twitter

Perl 6 in comments

Perl 6 Modules

New modules:

Updated modules:

Winding Down

An exciting week with many submissions, blog posts and quite a few positive comments and support from unexpected corners of the interwebs. Tis looking a lot like spring! Please check in again next week for more news about Perl 6!

Jo Christian Oterhals: Perl 6 small stuff #16: All your base are belong to us

Published by Jo Christian Oterhals on 2019-04-02T14:21:16

It’s the second week of the Perl Weekly Challenge, and like last week we’ve got two assignments — one “beginner” and one “advanced”.

The advanced assignment this time was: “Write a script that can convert integers to and from a base35 representation, using the characters 0–9 and A-Y.” Even though this is a blog mainly about Perl 6 I thought it’d be fun to start with my Perl 5 solutions to the advanced assignment, just so it’s even easier to appreciate the simplicity of the Perl 6 solution… although not, as you will see, without some discussion.

PERL 5
# Convert from base35 to base10
perl -E '%d = map { $_ => $c++ } (0..9,A..Y); $i = 1; for (reverse(split("", @ARGV[0]))) { $e += $i * $d{$_}; $i = $i * 35; } say $e' 1M5
# Output: 2000
# Convert from base10 to base35
perl -E '%d = map { $c++ => $_ } (0..9,A..Y); while ($ARGV[0] > 0) { push @n, $d{$ARGV[0] % 35}; $ARGV[0] = int($ARGV[0] / 35); } say join("", reverse(@n));' 2000
# Output: 1M5

So these are working one-liners but hardly readable ones. They also violate a lot of best practices. So I expanded them into a full script that’s easier to reuse and understand, with added strict and error handling as well as support for positive/negative numbers (+ and - prefixes).

#!/usr/bin/env perl
#
# Usage:
# perl base35.pl [+-]NUMBER FROM-BASE, e.g.
#
# perl base35.pl 1000 10
# Output: SK
#
# perl base35.pl SK 35
# Output: 1000
#
# perl base35.pl -SK 35
# Output: -1000
use v5.18;
say base35_conv(@ARGV);

sub base35_conv {
    my ($no, $base) = (uc(shift), shift);
    if ($base != 10 && $base != 35) {
        warn "Not a valid base, must be 10 or 35";
        return -1;
    }
    if (($base == 35 && $no !~ /^[\+\-]{0,1}[0-9A-Y]+$/) || ($base == 10 && $no !~ /^[\+\-]{0,1}[0-9]+$/)) {
        warn "You have to provide a valid number for the given base";
        return -1;
    }
    my ($c, $e) = (0, 0);
    my $prefix = $no =~ s/^(\+|-)// ? $1 : "";
    my %d = map { if ($base == 35) { $_ => $c++ } else { $c++ => $_ } } (0..9,'A'..'Y');
    if ($base == 35) {
        my $i = 1;
        for (reverse(split("", $no))) {
            $e += $i * $d{$_};
            $i = $i * 35;
        }
    }
    else {
        my @digits;
        while ($no > 0) {
            push @digits, $d{$no % 35};
            $no = int($no / 35);
        }
        $e = join("", reverse(@digits));
    }
    return ( $prefix ? $prefix : "" ) . $e;
}

There’s really not much to comment about the code above. It works and is reasonably readable. It’s quite long, however, and that’s where Perl 6 comes in and destroys it.

PERL 6
# Convert from base35 to base10
perl6 -e 'say "1M5".parse-base(35)'
# Output: 2000
# Convert from base10 to base35
perl6 -e 'say 2000.base(35)'
# Output: 1M5

At this point you’re allowed to stop for a second and appreciate the simplicity of Perl 6. But:

Since these are built-in functions in Perl 6 this wasn’t — in my opinion — the best Perl 6 assignment. I guess the point of the assignment is to write a solution from scratch — had I solved the Perl 5 version of the assignment by using a ready-made CPAN module such as Math::Int2Base I’d feel that I cheated. Maybe that’s just me?
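For the curious, here is a minimal from-scratch sketch of my own (sub names hypothetical, non-negative integers only):

my @digits = flat '0'..'9', 'A'..'Y'; # the 35 digit characters
my %value  = @digits.antipairs;       # char => numeric value

sub to-base35(Int $n) {
    $n < 35 ?? @digits[$n] !! to-base35($n div 35) ~ @digits[$n % 35]
}
sub from-base35(Str $s) {
    $s.comb.map({ %value{$_} }).reduce(* * 35 + *)
}

say to-base35(2000);    # 1M5
say from-base35('1M5'); # 2000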

As for the “beginner” assignment this time — “Write a script or one-liner to remove leading zeros from positive numbers” — my Perl 6 and Perl 5 solutions are identical:

perl -E 'say "001000"*1;'
perl6 -e 'say "001000"*1;'
# Both output: 1000

Although the assignment wants a script that removes leading zeroes from positive numbers, this will just as easily remove them from negative numbers as well. These will also work on floating point numbers:

perl -E 'say "-001000"*1;'
perl6 -e 'say "-001000"*1;'
# Both output: -1000
perl -E 'say "001000.234"*1;'
perl6 -e 'say "001000.234"*1;'
# Both output: 1000.234

You can take it one step further with Perl 6, though. Should you for some reason — and I’m not able to think of a good one to be honest — want to do the same on a fraction, this is the way to do it:

perl6 -e 'say "003/4".Rat.nude.join("/");'
# Output: 3/4

.nude returns a two-element list with the numerator and denominator so that we can choose how to represent it (a naive say "3/4"*1; would print 0.75 and would therefore not be a satisfying solution considering how the assignment is specified).
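One caveat worth adding here (my note, not the author’s): a Rat is stored in lowest terms, so .nude returns the reduced fraction:

say "003/4".Rat.nude.join("/"); # 3/4
say "002/4".Rat.nude.join("/"); # 1/2, since Rats normalize to lowest terms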

So that’s it for now. It may sound a little silly to write this in a Perl 6 centric blog, but what made the assignment interesting this week was Perl 5.

I look forward to next week’s assignment already.

Weekly changes in and around Perl 6: 2019.13 No Jokes Today

Published by liztormato on 2019-04-01T12:38:34

(this space intentionally left blank)

Rakudo Star Release 2019.03

Naoum Hankache has announced the release of Rakudo Star 2019.03, the first Rakudo Star release to feature support for Perl 6.d. Thanks to Naoum Hankache and his team for getting this brave new release out of the door. And kudos to Steve Mynott for having done a Rakudo Star release so many times before!

Perl Weekly Challenge Fallout

The first Perl Weekly Challenge generated quite some blog posts and tweets. Here’s a selection that (also) mention Perl 6 solutions:

A second challenge has been published already! More Perl 6 solutions will be very welcome!

Zef plugins

Tony O’Dell introduces his work on creating zef plugins with an example of implementing a config parameter for zef for configuration management. It looks like some good ideas from git plugins have been assimilated!

I like Rakudo 100x

Wenzel P. P. Peppmeyer found a strange regression in Perl 6 and blogged about it. Which resulted in a Travis-CI test that should turn green whenever the bug gets fixed.

Staying composed

Paul Cochrane was frustrated about not being able to enter π on his keyboard, researched it and blogged about it extensively (Reddit comments).

Conditional whenever

Wenzel P. P. Peppmeyer actually wrote a second blogpost this week, this time about filtering the output of iostat, which uses a Supply that does nothing. I guess that is similar to Empty.

Space case

An interesting discussion about an addition to the group of CamelCase, snake_case and kebab-case: the space case, which looks like allowing space characters as part of identifiers. Shudder.

Perl 6 gather, I take

Arne Sommer has written a blog post about gather and take, in which he shows several approaches to scrolling lines of text on a screen with a given delay. All part of preparations for his class at PerlCon 2019 (Reddit comments).

No more Perl 6 Weekly

Yours truly explains why she won’t post the Perl 6 Weekly to the /r/perl Reddit anymore (Facebook comments).

Core developments

Questions about Perl 6

Meanwhile on Facebook

Meanwhile on perl6-users

Meanwhile on Twitter

Perl 6 in comments

Perl 6 Modules

New modules:

Updated modules:

Winding Down

The original main article of this Perl 6 Weekly was an April Fool’s prank that involved both Perl 5 and Perl 6. It seems however, that yours truly would be the last person that should be allowed to do such a prank. It’s all serious business. Unjokingly, eviction from the community was suggested. Hopefully see you next week for less serious Perl 6 news. 🙂

gfldex: Conditional whenever

Published by gfldex on 2019-03-29T13:07:34

I wrote a script to filter iostat because the latter either displays too much or too little. It also doesn’t know about bcache. I wanted the script to react to pressing q the same way top, atop or iotop do. But it should only watch the keyboard and quit when $*OUT is a terminal. First we need to read the keyboard.

whenever key-pressed(:!echo) {
    when 'q' | 'Q' { done }
}

Now we’ve got a few options to add a check for an attached terminal:

if $*OUT.t {
    whenever key-pressed(:!echo) {
        when 'q' | 'Q' { done }
    }
}

$*OUT.t && do whenever key-pressed(:!echo) {
    when 'q' | 'Q' { done }
}

do whenever key-pressed(:!echo) {
    when 'q' | 'Q' { done }
} if $*OUT.t

The last one kind of lines up with other whenever blocks but the condition gets harder to read. At first I thought it won’t be possible to use ?? !! because whenever always wants to run .tap on the parameter. But then I remembered that we use 0x90 to tell a CPU to do nothing. If we get a Supply that does nothing we can opt out of doing something.

constant NOP = Supplier.new;
whenever $*OUT.t ?? key-pressed(:!echo) !! NOP {
    when 'q' | 'Q' { done }
}

Now it neatly lines up with other whenever blocks.

As a member of the Perl family, Perl 6 has more than one way to do it. Most of them look a bit odd though.

Jo Christian Oterhals: I have to say your Fizz Buzz solution is very elegant!

Published by Jo Christian Oterhals on 2019-03-29T10:37:26

I have to say your Fizz Buzz solution is very elegant! I understand that you felt you had to change your code a little to accommodate for the whitespace. I wonder why you chose the modifications you did instead of just adding a space at the end of fizz? Like this…

perl -E 'say +("fizz ")[$_ % 3] . (buzz)[$_ % 5] || $_ for 1 .. 20'

I guess some would say that — although invisible — printing an unnecessary character when only Fizz hits is not very elegant. Was that your thinking as well?

Jo Christian Oterhals: Perl 6 small stuff #15: Long story about short answers to Perl Weekly Challenge no. 1

Published by Jo Christian Oterhals on 2019-03-28T08:08:34

Last week I discovered a new Twitter account, the Perl Weekly Challenge. This is an initiative by Mohammad Anwar where Perl 5 and 6 programmers get a programming challenge every week. There will be challenges both for beginners and advanced programmers.

This week we got the first two-part challenge. The first — the beginner challenge — was to substitute every ‘e’ in the string “Perl Weekly Challenge” with a capital ‘E’ and count the occurrences of ‘e’. The second — the expert challenge — was to program a one-liner that solves the Fizz Buzz challenge for every integer from 1 through 20: if the integer is divisible by 3 or 5 or both, the integer should be replaced with Fizz, Buzz or FizzBuzz respectively.

Let’s start with the latter. If we omit the one-liner requirement, an easy and relatively obvious solution would look something like these two:

// JavaScript/Node solution using if statements
for (i = 1; i <= 20; i++) {
    if (i % 3 == 0 && i % 5 == 0) console.log("Fizz Buzz");
    else if (i % 3 == 0) console.log("Fizz");
    else if (i % 5 == 0) console.log("Buzz");
    else console.log(i);
}

# A more Perl 6 variant with "switch/case" syntax
for 1..20 {
    given $_ {
        when $_ %% 3 && $_ %% 5 {
            say "Fizz Buzz";
        }
        when $_ %% 3 {
            say "Fizz";
        }
        when $_ %% 5 {
            say "Buzz";
        }
        default {
            say $_;
        }
    }
}

Now — one could always convert the Perl 6 code into a one-liner by removing the line feeds and adding ; after the }’s. But that would neither look nor be very elegant.

What I wanted was a solution that almost was — and almost read — like one single statement. What I came up with was this:

say gather { take "Fizz " if $_ %% 3; take "Buzz" if $_ %% 5 } || $_ for 1..20

If I may say so myself this code uses gather in a rather clever way. Checking for fizz and buzz is no longer an either/or scenario, but an “and” scenario. gather/take will generate a one or two element list if the requirements are met: either (Fizz ), (Buzz) or (Fizz Buzz) if both criteria are met. If none of the criteria are met, the || (“or”) selects and returns the integer instead.

The resulting output looks like this:

1
2
(Fizz )
4
(Buzz)
(Fizz)
7
8
(Fizz)
(Buzz)
11
(Fizz)
13
14
(Fizz Buzz)
16
17
(Fizz)
19
(Buzz)

Should you want to remove the parentheses from the output, add a join:

say join " ", gather { take "Fizz" if $_ %% 3; take "Buzz" if $_ %% 5; } || $_ for 1..20

But I wanted the code to be as short as possible — without join it is 9 characters shorter. The shorter version is the one I submitted.

Now, the beginner challenge was also an interesting one. Count the number of e’s in the string “Perl Weekly Challenge” and also replace them with capitalized e’s.

There are several ways to do this. The obvious way (perhaps) is this:

my $text = "Perl Weekly Challenge";
$text ~~ s:g/e/E/;
say $text;
say "E's: " ~ $text.comb('e').elems;

.comb filters out characters/elements matching a criterion — in my case ‘e’ — and returns them as a list. The elems routine returns the number of elements in the list.

But there had to be a more concise and perhaps elegant way to do this. Enter S: Capital S is a substitution operator that unlike its minuscule sibling — s — doesn’t change the string it’s used on, but returns a new string with the changes instead. That means that you can do substitutions on immutable strings. Note that the syntax is a little different:

my $t = "test 1-2-3";
my $t2 = S:g/\d/NUMBER/ given $t;
say $t; # unchanged
say $t2;
# Output:
test 1-2-3
test NUMBER-NUMBER-NUMBER

That the S doesn’t attempt to change the original gives us the possibility to run it on immutable strings as well:

say S:g/i/I/ given "original";
# Output: 
orIgInal

I combined this with the possibility to run code within strings using the {} brackets. I placed an S substitution there so that…

say "{S:g/e/E/ given "Perl Weekly Challenge"}";
# Output:
PErl WEEkly ChallEngE

Now I’ve reduced the substitution and the printing to one line. To keep the final code in one line, I had to figure out a way to count the number of e’s without using a comb method. I figured I could use Perl 6’s built-in $/ variable for that instead. After a :g match it contains the list of match objects created by the latest regex. Counting how many e’s there are is now just a matter of using the .elems method, which returns the number of objects.

The end result was this:

say "{S:g/e/E/ given "Perl Weekly Challenge"}, E={$/.elems}";
# Output:
PErl WEEkly ChallEngE, E=5

Note that the brackets are executed in the order they appear from left to right. That means that $/.elems will not return what you expect if you put it at the start of the string.

Anyway — the resulting code was short and clear, and I was quite satisfied with both this and the FizzBuzz code. Thanks to The Perl Weekly Challenge no. 1 I forced myself to think a little differently than I normally do, and employed techniques I hadn’t really used before. A great learning experience.

POSTSCRIPT — the Perl 5 solutions
One of the best things about teaching myself Perl 6 is that it has rekindled my love for and interest in Perl 5. I now constantly discover that modern Perl 5 has evolved a lot since the Perl 5 I learnt to know 20 years ago. Many of the things I like about Perl 6 are now available in Perl 5 — although with a different syntax.

Non-destructive substitutions is available in Perl 5 too, so the Perl 5 answer to the substitution exercise is not that different from Perl 6:

print "Perl Weekly Challenge" =~ s/e(?{$c++})/E/gr . ", E=$c\n";
# Output:
PErl WEEkly ChallEngE, E=5

The trick is that the /r modifier turns on the non-destructive feature. This is one of the “new” things in Perl that I didn’t know about (I’m constantly reminded that the amount of stuff I don’t know about Perl 5 outweighs what I know by a good margin).

What Perl 5 does not have, however, is a way to easily track how many substitutions are made. Enter the (?{}) part of the regex, a way to execute code within a regex. Here I just increment a variable $c, and print it after the substitutions are done. A little different from the Perl 6 version, but just as compact.

As for the FizzBuzz challenge the answer is quite different, but concise like its Perl 6 cousin:

print map { "$_\n" } ($_, qw{Fizz Buzz}, "Fizz Buzz")[!($_ % 3) + !($_ % 5) * 2 ] for (1..20);
# Output:
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
Fizz Buzz
16
17
Fizz
19
Buzz

For each integer 1..20 I create an ad hoc array consisting of the integer, Fizz, Buzz and Fizz Buzz. Which of the elements is chosen is determined by the result of testing the integer mod 3 or mod 5. Success leaves zero as it should; if unsuccessful it returns the remainder.

Knowing this we can use the fact that Perl 5 doesn’t really have an understanding of true and false in the traditional way, but rather assumes any 0 value to be false and any non-zero value to be true. So I flip the results using the ! operator: if int % 3 returns zero, it becomes true — or 1. The same goes for int % 5. I multiply the latter by 2, and add the results of the two tests. The result is zero if neither of the mods returns 0; 1 if % 3 returns zero, 2 if % 5 returns zero and 3 if both % 3 and % 5 return zero.
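Worked through step by step for a few values (my illustration, using the flipped tests described above):

# $_ =  7: !(7 % 3)  = 0, !(7 % 5)  * 2 = 0, index 0 => 7
# $_ =  9: !(9 % 3)  = 1, !(9 % 5)  * 2 = 0, index 1 => "Fizz"
# $_ = 10: !(10 % 3) = 0, !(10 % 5) * 2 = 2, index 2 => "Buzz"
# $_ = 15: !(15 % 3) = 1, !(15 % 5) * 2 = 2, index 3 => "Fizz Buzz"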

As it happens these numbers correspond to elements in the ad hoc array, so that 0 returns the integer itself, 1 Fizz, 2 Buzz and 3 Fizz Buzz.

In total the code returns a 20-element array with 1..20 or their Fizz Buzz replacements. Finally, for this to be printable I run it through map and add a newline to the elements. In short: The Perl 5 version of the code is totally different from the Perl 6 version, but equally concise. I’m happy with both.

In any case — this has been so fun that I look forward to next week’s challenge.

NOTE
Compared with the original article, I made a few small adjustments both to the Perl 6 and 5 versions of the code, so that the Fizz Buzz matches are printed “Fizz Buzz” and not “FizzBuzz”. I didn’t know about that criterion before I read Dave Cross’s post about his solutions to these challenges. The additional code becomes marginally longer, but not significantly.

Death by Perl6: zef plugin - a very alpha glimpse

Published by Tony O'Dell on 2019-03-28T21:46:49

Disclaimer: This design could change before zef plugins are thrown into the world, but this is the initial look at the plugin paradigm and something you can follow along with and actually use today.

Let's take a look at implementing a plugin to let us edit/view config from the CLI without firing up a text editor and editing zef's config manually.

set up

Right now the plugin code is all residing in a branch other than zef master so we need to set up an alternate version of zef for us to use for the tutorial. This tutorial will use the alias zef-p to denote the version of zef capable of plugins. Doing a basic install using zef-p install will install modules the same as your system/local zef.

$ git clone https://github.com/ugexe/zef.git zef-p
# clones zef into directory zef-p
$ cd zef-p

zef-p$ git fetch origin allow-runtime-interface-plugins
# fetch the WIP plugins branch
zef-p$ git checkout allow-runtime-interface-plugins
# check out the plugins branch
zef-p$ alias zef-p="perl6 -I$(pwd)/lib $(pwd)/bin/zef"
# make an alias to make using our plugin capable zef easy to use
zef-p$ cd ..

$ # now we have zef-p available for the rest of the tutorial

writing a plugin

The example we're going to use is a basic view, add, remove for zef-p test configuration. We're going to add the commands zef-p config and zef-p config.test to zef in the rest of this tutorial.

the skeleton

Creating your command is as easy as writing a script and putting it in the bin/ directory of your module. We'll go through the creation of zef-p config and zef-p config.test.

zef-p config

Create a file in your bin/ directory called zef-config.

zef-p config.test

Create a file in your bin/ directory called zef-config.test.

all done!

Not really. Let's add some functionality to zef-config!

implementing zef-p config

Open your bin/zef-config file and add the following

multi MAIN('config') {
  note qq:to/END_USAGE/
    zef config - a plugin for zef config management

    zef config.test

      ls
        List all of the plugins zef has available for testing

      push --module (--comment?) (--short-name?)
        Appends the specified module to the test plugins

      remove index(Int)
        Removes the module from the config with index of Int

      prepend --module (--comment?) (--short-name?)
        Prepends the specified module to the test plugins
  END_USAGE
}

The part above to be most interested in is the use of multi MAIN('config'). This is the function that brings the functionality to zef-p config and it behaves as you'd expect of MAIN. The key part is the keyed-in 'config'. When an unknown $command is supplied to zef, say zef-p config out of the box, zef will search for any modules providing bin/zef-$command and will load them. So if your module provides three different $commands then you will need three bin scripts (or one, depending on how symlink handy you are and expect your users to be).

A second note, these plugins are not installed to your perl6 compunit repository. This is to save zef from combing your normal CURs every time a command it doesn't recognize is received.

This will just print out our usage for zef-p config.test. Just like that, when we zef-p install --plugin . (this plugin installation interface will likely change) we can then run zef-p config and it will show our usage. The --plugin option is what tells zef-p to install in its private plugin CUR.

implementing zef-p config.test

To keep the tutorial informative and less about how to write perl, we'll implement the ls feature and add the stubs for push|remove|prepend as described in our usage above.

In your bin/zef-config.test

use Zef;

sub pad (Int $w, Str $val) {
  sprintf(" %-{$w - 1}s", $val);
}

multi MAIN('config.test', 'ls') {
  my $idx = 0;
  my @widths  =
    max(7, $*zef.config.hash<Test>.elems.Str.chars), #indexes
    24,
    20,
    28,
  ;
  my @headers = 'index', 'module', 'short-name', 'comment';

  for |$*zef.config.hash<Test> -> $info {
    @widths[1] = max(@widths[1], ($info<module>//'').chars + 2);
    @widths[2] = max(@widths[2], ($info<short-name>//'').chars + 2);
    @widths[3] = max(@widths[3], ($info<comment>//'').chars + 2);
  }

  say '-' x 2 + ([+] @widths);
  say (0..^@headers.elems).map({ pad(@widths[$_], @headers[$_]) }).join('|');
  say '-' x 2 + ([+] @widths);

  for |$*zef.config.hash<Test> -> $info {
    say (
      pad(@widths[0], $idx++.Str),
      |(1..^@headers.elems).map({ pad(@widths[$_], $info{@headers[$_]}//'') })
    ).join('|');
  }

  say '-' x 2 + ([+] @widths);
}

multi MAIN('config.test', 'remove', Int $index) { ... }

multi MAIN('config.test', 'prepend', Str:D $module, Str :$short-name?, Str :$comment?) { ... }

multi MAIN('config.test', 'push', Str:D $module, Str :$short-name?, Str :$comment?) { ... }

Looking deeper into our ls command, you may notice $*zef. That variable gives access to the config and setup that happens in Zef::CLI. In the ls command we're using it to look at the config that was loaded from zef's configuration file, getting the width of columns to print out, and finally printing out all of the data we find in zef's config regarding Test plugins. The output of that command is displayed later in this post.

make our plugin installable

To make this plugin installable, we need a META6.json file. Here is a barebones template (please, if you're releasing real modules into the ecosystem then put some effort and time into your META6.json files so others can read/understand them).

{
  "perl": "6.d+",
  "name": "Zef::Plugin::PluginManager",
  "auth": "deathbyperl6:the-viewer",
  "description": "a zef plugin that supplies a partial cli to managing zef's config",
  "provides": { }
}

install the plugin

So, we've created our bin/ scripts to provide zef-p config|config.test. Now it's time to install and test it out.

zef-p install --plugin .

As stated previously, --plugin will go away but at this time it's letting zef-p know that it should install our scripts to zef's private plugin CUR. The other thing of note, shown below, is that a plugin's USAGE will override zef's if the plugin successfully loads.

At this point you should be able to run the following commands:

λ ~/projects/zef-plugin-tutorial$ zef-p config
  zef config - a plugin for zef config management

  zef config.test

    ls
      List all of the plugins zef has available for testing

    push --module (--comment?) (--short-name?)
      Appends the specified module to the test plugins

    remove index(Int)
      Removes the module from the config with index of Int

    prepend --module (--comment?) (--short-name?)
      Prepends the specified module to the test plugins

λ ~/projects/zef-plugin-tutorial$ zef-p config.test ls
-------------------------------------------------------------------------------------
 index | module                     | short-name         | comment
-------------------------------------------------------------------------------------
 0     | Zef::Service::TAP          | tap-harness        | Perl6 TAP::Harness adapter
 1     | Zef::Service::Shell::prove | prove              |
 2     | Zef::Service::Shell::Test  | perl6-test         |
-------------------------------------------------------------------------------------

λ ~/projects/zef-plugin-tutorial$ zef-p config.test
Usage:
  /Users/tonyo/projects/zef-plugin-tutorial/zef/bin/zef config.test ls
  /Users/tonyo/projects/zef-plugin-tutorial/zef/bin/zef config.test remove <index>
  /Users/tonyo/projects/zef-plugin-tutorial/zef/bin/zef [--short-name=<Str>] [--comment=<Str>] config.test prepend <module>
  /Users/tonyo/projects/zef-plugin-tutorial/zef/bin/zef [--short-name=<Str>] [--comment=<Str>] config.test push <module>

clean up

If you have installed any modules not using --plugin then they likely went into one of your [pre]configured repositories and you can remove them with zef uninstall if you need to. From here, to get rid of zef-p or start anew, just remove the zef-p directory we created when cloning zef in the setup phase of this post.

conclusion

The code in this repo can be found here

This is a barebones primer to extending the functionality of zef. If you have questions, thoughts, concerns please reach out to me @tony-o in #perl6 on freenode. A .tell in that channel is helpful because I'm not always able to monitor.

Jo Christian Oterhals: What’s even cooler is that with Perl 5.10

Published by Jo Christian Oterhals on 2019-03-28T13:15:19

What’s even cooler is that with Perl 5.10 or later, you can do almost the same, though using the negation of the mod… i.e.

perl -E 'say "Fizz"x!($_%3)."Buzz"x!($_%5)||$_ for 1..20'

Strangely satisfying to see the two perls being this similar.

gfldex: I like Rakudo 100x

Published by gfldex on 2019-03-25T21:06:24

One of my scripts stopped working without any change by my hands with a most peculiar error message:

Type check failed in binding to parameter '$s'; expected Str but got Int (42)
  in sub jpost at /home/bisect/.perl6/sources/674E3526955FCB738B7B736D9DBBD3BD5B162E5C (WWW) line 9
  in block <unit> at wrong-line-or-identifier.p6 line 3

Whereby line 9 looks like this:

@stations = | jpost "https://www.perl6.org", :limit(42);

Rakudo is missing the parameter $s and so am I, because neither my script nor any routine in WWW contains it. This is clearly a regression on a rather simple piece of code and in a popular module. Since I didn’t check that script for quite some time I can’t easily tell which Rakudo commit caused it.

In #perl6 we got bisectable6, a member of the ever growing army of useful bots. Yet it could not help me because it doesn’t come with the community modules installed. Testing against a few dozen Rakudo versions by hand was out of the question. So I mustered the little bash-foo I have and wrote a few scripts to build Rakudos past. This resulted in #2779.

If you wish to go on a bug hunt for time travelers too, clone the scripts, install the modules your script needs and make sure it fails with an exit code greater than 0. Then run ./build-head-to-tail.sh <nr-of-commits> to build as many Rakudos as you like, and test them with ./run-head-to-tail <nr-of-commits> <your-script-name-here>. Tests are run in parallel, up to the number of cores of the host. After a while you get a list of OK, FAILed and SKIPed commits. Any Rakudo commit that fails to build will be SKIPed.

Running as root may not work because the modules will be put in the wrong spot by zef. A single commit will take about 70MB of disk space with little hope for deduplication.

The brave folk who push Perl 5 ever forward have a whole CPAN worth of tests to check if anything breaks while they change the compiler. Our stretch of land is still quite small in comparison but I hope to have helped with testing it better.

Weekly changes in and around Perl 6: 2019.12 Cool Truck!

Published by liztormato on 2019-03-25T15:37:12

Tom Browder has proudly shown off his brand new Perl 6 vehicle tag. I guess the camel got modernized with a 6-speed automatic and air-conditioning 🙂

Welcome, Kane Valentine

Kane Valentine (also known as kawaii) has become the latest Perl 6 core developer. Looking forward to seeing more of his work on the Perl 6 core. As a really good start, Kane has committed to doing the next Rakudo Compiler Release!

Request for Members

The Perl Foundation Grant Committee, the people who vote on whether a grant proposal to the Perl Foundation is accepted or not, are looking for new members. Voting members review proposals every two months, including community feedback, and vote on whether to approve/fund the grant. Please leave a comment, or even better make your intent to be a Grant Committee member clear to Grant Committee chairman Will “Coke” Coleda.

Perl Conference 2019 Newsletter

The organizers of the Perl Conferences in Pittsburgh, PA (16-21 June) have published their March Newsletter, with requests to YAPC “regulars” and volunteers alike! And the fact that the early bird pricing will be available until the 15th of May! And a Golden Ticket option! Check it out!

Perl Weekly Challenge

The first Perl Weekly Challenge has been published. Check out Mohammad S Anwar‘s Perl Weekly FAQ for more information. Provided are a challenge for Beginners, and a challenge for Experts. Solutions can be given in either Perl 5 or Perl 6. Let the games begin!

London Perl Workshop Videos

The videos of the London Perl Workshop 2018 have been uploaded. The Perl 6 ones are:

Too bad only the presentations in the main room were recorded, so these Perl 6 related presentations:

were sadly not recorded.

Nightly Docker builds

Tony O’Dell has set up a nightly Perl 6 build on DockerHub. A quick and cool way to be able to test your stuff on the bleeding edge of Perl 6 development!

Reverse Linear Scan Allocation

Bart Wiegmans reports on the feedback that he got on his previous blog post in a new blog post. Food for compiler builder lovers! (/r/ProgrammingLanguages, /r/Compilers comments).

Heap Snapshots

Timo Paulssen reports on his progress of the heap snapshot profiler and how a new storage format reduced the size of a series of snapshots from 1.1 Gbyte to less than 100 Mbyte. Which implies you can save snapshots for a longer period before the size of the snapshots becomes really unwieldy! Can’t wait for the next update!

What’s in an ORM

Tony O’Dell also blogged about his ORM named DB::Xoos. He shows the key features of DB::Xoos, such as flexible configuration, relational modeling made easy, convenience methods and validation. Recommended reading for people who like Perl 5’s DBIx::Class.

Colonoscopy

Arne Sommer is on a roll. In the third article of his recently started blog, he describes all the different ways the colon can be used in Perl 6 (Reddit comments).

Rakudo Star RC2 available

Some issues were found with the first Rakudo Star 2019.03 candidate, so there is now a second 2019.03 Rakudo Star candidate. Please download and check it out on as many systems as you can, and report any issues you may find. Thank you!

Missing math/statistics functions

Aleks-Daniel Jakimenko-Aleksejev started an issue about missing math / statistics functions, such as clip (or clamp), mean, median. Comments welcome!

Core Developments

Questions about Perl 6

Meanwhile on Facebook

Meanwhile on perl6-users

Meanwhile on Twitter

Perl 6 in comments

Perl 6 Modules

Cool to see the number of new modules exceeds the number of updates! New modules:

Updated modules:

Winding Down

So many cool things this week: relocatability (seen by many as a prerequisite for proper packaging), nightly Docker builds, a new core developer and releaser and more new modules than updated ones. Yours truly likes to see that very much! More about that next week!

my Timotimo \this: Intermediate Progress Report: Heap Snapshots

Published by Timo Paulssen on 2019-03-22T22:22:00

Hello dear readers,

this time I don't have something finished to show off, but nevertheless I can tell you all about the work I've recently started.

The very first post on this blog already had a little bit about the heap snapshot profiler. It was about introducing a binary format to the heap snapshot profiler so that snapshots can be written out immediately after they were taken and by moarvm itself rather than by NQP code. This also made it possible to load snapshot data faster: the files contain an index that allows a big chunk of data to be split up exactly in the middle and then read and decoded by two threads in parallel.

The new format also resulted in much smaller output files, and of course reduced memory usage of turning on the profiler while running perl6 code. However, since it still captures one heap snapshot every time the GC runs, every ordinary program that runs for longer than a few minutes will accumulate quite an amount of data. Heap snapshot files (which are actually collections of multiple snapshots) can very easily outgrow a gigabyte. There would have to be another change.

Enter Compression

The new format already contained the simplest thing that you could call compression. Instead of simply writing every record to the file as it comes, records that have smaller numbers would be stored with a shorter representation. This saved a lot of space already, but not nearly as much as off-the-shelf compression techniques would.

There had to be another way to get compression than just coming up with my own compression scheme! Well, obviously I could have just implemented something that already exists. However, at the time I was discouraged by the specific requirements of the heap snapshot analyzer - the tool that reads the files to let the user interactively explore the data within it:

Normally, compression formats are not built to support easily seeking to any given spot in the uncompressed data. There was of course the possibility to compress each snapshot individually, but that would mean a whole snapshot could either only be read in with a single thread, or the compression would have to go through the whole blob and when the first splittable piece was decompressed, a thread could go off and parse it. I'm not entirely sure why I didn't go with that, perhaps I just didn't think of it back then. After all, it's already been more than a year, and my brain compresses data by forgetting stuff.

Anyway, recently I decided I'd try a regular compression format for the new moarvm heap snapshot file format. There's already a Perl 6 module named Compress::Zlib, which I first wanted to use. Writing the data out from moarvm was trivial once I linked it to zlib. Just replace fopen with gzopen, fwrite with gzwrite, fclose with gzclose and you're almost done! The compression ratio wasn't too shabby either.

When I mentioned this in the #moarvm channel on IRC, I was asked why I use zlib instead of zstd. After all, zstd usually (or always?) outperforms zlib in both compression/decompression speed and output size. The only answer I had for that was that I hadn't used the zstd C library yet, and there was not yet a Perl 6 module for it.

Figuring out zstd didn't go as smoothly as zlib, not by a long shot. But first I'd like to explain how I solved the issue of reading the file with multiple threads.

Restructuring the data

In the current binary format, there are areas for different kinds of objects that occur once per snapshot. Those are collectables and references. On top of that there are objects that are shared across snapshots: Strings that are referenced from all the other kinds of objects (for example filenames, or descriptions for references like "hashtable entry"), static frames (made up of a name like "push", an ID, a filename, and a line number), and types (made up of a repr name like VMArray, P6opaque, or CStruct and a type name like BagHash or Regex).

That resulted in a file format that has one object after the other in the given area. The heap snapshot analyzer itself then goes through these areas and splits the individual values apart, then shoves them off into a queue for another thread to actually store. Storage inside the heap analyzer consists of one array for each part of these objects. For example, there is one array of integers for all the descriptions and one array of integers for all the target collectables. The main benefit of that is not having to go through all the objects when the garbage collector runs.
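As a toy illustration of that one-array-per-attribute storage (names made up, not the heapanalyzer's actual code):

# Structure-of-arrays: one native int array per static frame attribute,
# so the GC sees a few big arrays instead of millions of little objects.
my int @sf-name; # index into the shared string heap
my int @sf-file; # index into the shared string heap
my int @sf-line;

sub store-static-frame(int $name, int $file, int $line) {
    @sf-name.push: $name;
    @sf-file.push: $file;
    @sf-line.push: $line;
}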

The new format on the other hand puts every value of each attribute in one long string before continuing with the next attribute.

Here's how the data for static frames was laid out in the file in the previous version:

"sframes", count, name 1, uuid 1, file 1, line 1, name 2, uuid 2, file 2, line 2, name 3, uuid 3, file 3, line 3, … index data

The count at the beginning tells us how many entries we should be expecting. For collectables, types, reprs, and static frames this gives us the exact number of bytes to look for, too, since every entry has the same size. References on the other hand have a simple "compression" applied to them, which doesn't allow us to just figure out the total size by knowing the count. To offset this, the total size lives at the end in a place that can easily be found by the parser. Strings are also variable in length, but there's only a few hundred of them usually. References take up the most space in total; having four to five times as many references as there are collectables is pretty normal.

Here's how the same static frame data is laid out in the upcoming format:

"names", length, zstd(name 1, name 2, name 3, …), index data, "uuids", length, zstd(uuid 1, uuid 2, uuid 3, …), index data, "files", length, zstd(file 1, file 2, file 3, …), index data, "lines", length, zstd(line 1, line 2, line 3, …), index data

As you can see, the values for each attribute now live in the same space. Each attribute blob is compressed individually, each has a little piece of index data at the end and a length field at the start. The length field is actually supposed to hold the total size of the compressed blob, but if the heap snapshot is being output to a destination that doesn't support seeking (moving back in the file and overwriting an earlier piece of data) we'll just leave it zeroed out and rely on zstd's format being able to tell when one blob ends.
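Here's a little sketch of that seek-back trick; the helper names and the 64-bit little-endian length encoding are my own invention, purely to show the idea:

# hypothetical helper: encode an unsigned 64-bit value, little-endian
sub encode-uint64(UInt $n --> Blob) {
    Blob[uint8].new( (^8).map({ ($n +> (8 * $_)) +& 0xFF }) )
}

sub write-attribute-blob(IO::Handle $out, Blob $compressed, Bool :$seekable) {
    my $length-pos = $out.tell;
    $out.write: encode-uint64(0);      # placeholder for the length
    $out.write: $compressed;
    if $seekable {
        my $end = $out.tell;
        $out.seek($length-pos, SeekFromBeginning);
        $out.write: encode-uint64($compressed.bytes);
        $out.seek($end, SeekFromBeginning);
    }
    # otherwise the field stays zeroed and the reader falls back on
    # zstd's framing to find where the blob ends
}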

There are some benefits to this approach that I'd like to point out:

The last point, in fact, will let me put some extra fun into the files. First of all, I currently start the files with a "plain text comment" that explains what kind of file it is and how it's structured internally. That way, if someone stumbles upon this file in fifty years, they can get started finding out the contents right away!

On a perhaps more serious note, I'll put in summaries of each snapshot that MoarVM itself can just already generate while it's writing out the snapshot itself. Not only things like "total number of collectables", but also "top 25 types by size, top 25 types by count". That will actually make the heapanalyzer not need to touch the actual data until the user is interested in more specific data, like a "top 100" or "give me a thousand objects of type 'Hash'".

On top of that, why not allow the user to "edit" heap snapshots, put some comments in before sharing them with others, or maybe "bookmark" specific objects?

All of these things will be easier to do with the new format - that's the hope at least!

Did the compression work?

I didn't actually have the patience to exhaustively measure all the details, but here's a rough ratio for comparison: one dataset I've got results in a 1.1 gigabyte file with the current binary format, a 147 megabyte file when using gzip, and a 99 megabyte file using zstd (at the maximum "regular" compression level - I haven't checked yet whether the cpu usage is prohibitive for this, though). That's roughly a 7.5x size reduction with gzip and about 11x with zstd.

It seems like this is a viable way forward! Allowing capture to run 10x as long is a nice thing for sure.

What comes next?

The profiler view itself in Moarperf isn't done yet, of course. I may not put in more than the simplest of features if I start on the web frontend for the heap analyzer itself.

On the other hand, there's another task that's been waiting: Packaging moarperf for simpler usage. Recently we got support for a relocatable perl6 binary merged into master. That should make it possible to create an AppImage of moarperf. A docker container should also be relatively easy to build.

We'll see what my actual next steps will be - or will have been I suppose - when I post the next update!

Thanks for reading and take care
  - Timo

Death by Perl6: What's in an ORM, aka DB::Xoos

Published by Tony O'Dell on 2019-03-22T17:46:55

Using an ORM makes your code look a little cleaner. Today we can take a look at Xoos, an ORM aimed at making your life simpler, your code cleaner, and not getting in the way of your development.

This article will cover implementing an ERD in your code and show example uses.

Key features of Xoos:

quick terminology

model

You can think of this in database terms as a table, but in the rest of the article a model means the class + {convenience methods} + {optional yaml}.

If you're coming from DBIx this can be thought of the same as a resultclass.

row

Think of this in terms of the row class + {convenience methods} + {columns inherited from the model}.

set up

Using zef we'll install Xoos (if YAML::Parser::LibYAML fails to install or you don't want to install libyaml, then don't fret - you don't need it and it's just in the article to demonstrate YAML models).

zef install DB::Xoos YAML::Parser::LibYAML

Using SQLite3, we'll create the following ERD.

(ER diagram: books, authors, customers, orders, and order_details tables)

example$ sqlite3 xoos.sqlite3
SQLite version 3.19.3 2017-06-27 16:48:08
Enter ".help" for usage hints.

sqlite> create table books (book_id integer primary key autoincrement, title varchar(64) not null, author_id integer not null, date_published date, price float not null);

sqlite> create table authors (author_id integer primary key autoincrement, name varchar(64) not null, birth_date date);

sqlite> create table customers (customer_id integer primary key autoincrement, email_address varchar(64) not null, name varchar(64) not null, date_registered date default (datetime('now','localtime')));

sqlite> create table orders (order_id integer primary key autoincrement, customer_id integer not null, order_date date default (datetime('now','localtime')));

sqlite> create table order_details (order_detail_id integer primary key autoincrement, order_id integer not null, book_id integer not null);

sqlite> .quit

That's it for setup. Now we can start using Xoos and implementing our application.

commence coding!

This example is going to contain a lib directory and a bunch of other scripts for doing useful things with our models/rows.

ORM setup

Now we must create our ORM classes.

We'll develop two of the models, one with standard perl6 and one using YAML; the rest of the ERD can be implemented as practice, or the entire project can be downloaded at the end of the article.

App::Model::Book

In our path we're going to use the prefix of App for our models and rows. If you haven't already and you're following along, then create the directory lib/App/Model and then edit the file lib/App/Model/Book.pm6.

use DB::Xoos::Model;
unit class App::Model::Book does DB::Xoos::Model['books'];

qw<...>

Here we are simply defining our class and telling DB::Xoos to use the table books as our source of data for this model. Next we'll let the ORM know what columns are available.

has @.columns = [
  book_id => {
   type           => 'integer',
   is-primary-key => True,
   auto-increment => True,
  },
  title => {
    type     => 'varchar',
    nullable => False,
    length   => '64',
  },
  author_id => {
    type     => 'integer',  # matches authors.author_id (the schema declares it integer)
    nullable => False,
  },
  date_published => {
    type   => 'date',
  },
  price => {
    type   => 'float',
  },
];

Fairly straightforward. Using is-primary-key on multiple columns will allow you to key off of several columns at once, creating a compound key.
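For example, a hypothetical join-table model could mark both of its id columns (the column names here are invented for illustration):

has @.columns = [
  order_id => {
    type           => 'integer',
    is-primary-key => True,
  },
  book_id => {
    type           => 'integer',
    is-primary-key => True,
  },
];

Back in our Book model, the next step is to declare its relationships: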

has @.relations = [
  authors => {
    :has-one,
    :model<Author>,
    :relate(author_id => 'author_id'),
  },
  order_details => {
    :has-many,
    :model<OrderDetails>,
    :relate(book_id => 'book_id'),
  },
];

Here we're defining the relationships that models have with other models. We have a 1:N with :has-many and a 1:1 with :has-one.

After that our model is defined; if we have no row methods we'd like to implement, then we're done.

App::Model::OrderDetail

If you haven't already and you're following along, then create and edit the file models/orderdetail.yaml.

table: order_details
name: OrderDetail
columns:
  order_detail_id:
    type: integer
    nullable: false
    is-primary-key: true
    auto-increment: true
  order_id:
    type: integer
    nullable: false
  book_id:
    type: integer
    nullable: false
relations:
  book:
    has-one: true
    model: Book
    relate:
      book_id: book_id
  order:
    has-one: true
    model: Order
    relate:
      order_id: order_id

As you can see this is essentially doing what we did above without necessitating the explicit creation of the entire class.

Note: using yaml doesn't preclude you from putting model methods in a class App::Model::OrderDetail but it can make your model class methods look cleaner!

App::Model::*

Creating the rest of the models would be redundant, so the rest is left as an exercise (or it's available at the end of the article).

That's it! Your models are ready to be used; the rest of the code will live in scripts placed in the bin/ folder.

Example Usage

First ORM usage

We'll list what models the ORM makes available to us; this script is called show-models. All of the following examples assume you've written the same code up to and including the .connect call, which is omitted for brevity.

use DB::Xoos::SQLite;

my $xoos = DB::Xoos::SQLite.new;

$xoos.connect('sqlite://xoos.sqlite3', :options({
    :prefix<App>,               # loads classes from App::Model::*
    model-dirs => [qw<models>], # for our yaml column definitions
}));

"Successfully loaded models:\n  {$xoos.loaded-models.join("\n  ")}".say;

This script should output something like:

Successfully loaded models:
  Customer
  Author
  OrderDetail
  Order
  Book

While not entirely useful, it does let us know that everything loaded well. Now let's fill up our tables.

Loading data into tables from CSV

This script will use Text::CSV to load data from data/books.csv and generate other made-up data. This is a truncated version, because the full script is mostly long and repetitious for our purposes. Generating customers, orders, and order details is left to the user or, again, you can find a full script in the file at the end of the article.

qw<snipped setup for Xoos>;
use Text::CSV;

my @rows    = csv(:in('data/books.csv'));

my $book = $xoos.model('Book');
my $auth = $xoos.model('Author');

# generate book and author data from csv
for @rows[1..*] -> $row {

  # determine whether an author by that name exists, this isn't
  # robust as there may be many authors by the same name
  my $author = $auth.search({
    name => $row[1] eq '' ?? 'unknown' !! $row[1],
  });

  # if an author was found, use it
  if $author.count {
    $author .=first;
  } else {
    $author = $auth.new-row;
    $author.name($row[1] eq '' ?? 'unknown' !! $row[1]);
    $author.update;
  }
  
  # create our book and generate a random price
  $book.new-row({
    author_id => $author.author_id,
    title     => $row[0],
    price     => (299 .. 5000).pick(1)[0] / 100,
  }).update;
}

Viewing or finding customer information

In this example we'll use a relationship on customer to get the number of orders and how many items each order contained. The script also uses a convenience method built on App::Row::Customer (sketched after the script below) to show how many total items they've ordered.

use DB::Xoos::SQLite;
my DB::Xoos::SQLite $db .=new;


multi MAIN(:$email?, :$name, :$customer-id) {

  $db.connect('sqlite://xoos.sqlite3', :options({
    model-dirs => [qw<models>],
    prefix     => 'App',
  }));


  my %search;
  my $customers = $db.model('Customer');

  %search<customer_id> = $customer-id if $customer-id.defined;
  %search<name>        = %(
    'like' => '%' ~ $name ~ '%',
  ) if $name.defined;
  %search<email_address> = %(
    'like' => '%' ~ $email ~ '%',
  ) if $email.defined;

  if %search.keys {
    $customers .=search(%search);
    say "Searching with params:";
    dd %search;
  }
  say 'Customer info:';
  printf "| %-14s | %-17s | %-14s | %-12s | %-14s |\n", qw<customer_id email name orders order-items>;
  say 'x' x 87;

  for $customers.all -> $customer {
    printf "| %-14s | %-17s | %-14s | %-12s | %-14s |\n",
      $customer.customer_id,
      $customer.email_address,
      $customer.name,
      $customer.orders.count,
      $customer.total-order-items; # this uses the row level convenience method in App::Row::Customer
  }

  say 'x' x 87;

}
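The article doesn't show App::Row::Customer itself. Here's a rough sketch of what a row class with a total-order-items convenience method could look like - the DB::Xoos::Row role name and the exact traversal are assumptions on my part, so check the repo linked below for the real thing:

use DB::Xoos::Row;
unit class App::Row::Customer does DB::Xoos::Row;

method total-order-items {
    # walk this customer's orders and sum up their order_details rows
    [+] self.orders.all.map: *.order_details.count;
}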

If you have questions or would like to see more, exploring this or other topics, please ping me @tony-o in #perl6 on freenode.

Here are links to the files for this guide: github repo, zip file

brrt to the future: Reverse Linear Scan Allocation is probably a good idea

Published by Bart Wiegmans on 2019-03-21T22:52:00

Hi hackers! First of all, I want to thank everybody who gave such useful feedback on my last post. For instance, I found out that the similarity between the expression JIT IR and the Testarossa Trees IR is quite remarkable, and that they have a fix for the problem that is quite different from what I had in mind.

Today I want to write something about register allocation, however. Register allocation is probably not my favorite problem, on account of being both messy and thankless. It is a messy problem because - aside from being NP-hard to solve optimally - hardware instruction sets and software ABIs introduce all sorts of annoying constraints. And it is a thankless problem because the cases in which a good register allocator is useful - for instance, when there's lots of intermediate values used over a long stretch of code - are fairly rare. Much more common are the cases in which either there are trivially sufficient registers, or ABI constraints force a spill to memory anyway (e.g. when calling a function, almost all registers can be overwritten).

So, on account of this being not my favorite problem, and also because I promised to implement optimizations in the register allocator, I've been researching whether there is a way to do better. And what better place to look than one of the fastest dynamic language implementations around, LuaJIT? So that's what I did, and this post is about what I learned from that.

Truth be told, LuaJIT is not at all a learners' codebase (and I don't think its author would claim this). It uses a rather terse style of C and lots and lots of preprocessor macros. I had somewhat gotten used to the style from hacking dynasm though, so that wasn't so bad. What was more surprising is that some of the steps in code generation that are distinct and separate in the MoarVM JIT - instruction selection, register allocation and emitting bytecode - were all blended together in LuaJIT. Over multiple backend architectures, too. And what's more - all these steps were done in reverse order - from the end of the program (trace) to the beginning. Now that's interesting...

I have no intention of combining all phases of code generation like LuaJIT has. But processing the IR in reverse seems to have some interesting properties. To understand why that is, I'll first have to explain how linear scan allocation currently works in MoarVM, and how it is most commonly described:

  1. First, the live ranges of program values are computed. Like the name indicates, these represent the range of the program code in which a value is both defined and may be used. Note that for the purpose of register allocation, the notion of a value shifts somewhat. In the expression DAG IR, a value is the result of a single computation. But for the purposes of register allocation, a value includes all its copies, as well as values computed from different conditional branches. This is necessary because when we actually start allocating registers, we need to know when a value is no longer in use (so we can reuse the register) and how long a value will remain in use -
  2. Because a value may be computed from distinct conditional branches, it is necessary to compute the holes in the live ranges. Holes exist because if a value is defined on both sides of a conditional branch, the range will cover both the earlier (in code order) branch and the later branch - but from the start of the later branch to its definition that value doesn't actually exist. We need this information to prevent the register allocator from trying to spill-and-load a nonexistent value, for instance.
  3. Only then can we allocate and assign the actual registers to instructions. Because we might have to spill values to memory, and because values now can have multiple definitions, this is a somewhat subtle problem. Also, we'll have to resolve all architecture specific register requirements in this step.
In the MoarVM register allocator, there's a fourth step and a fifth step. The fourth step exists to ensure that instructions conform to x86 two-operand form (Rather than return the result of an instruction in a third register, x86 reuses one of the input registers as the output register. E.g. all operators are of the form a = op(a, b)  rather than a = op(b, c). This saves on instruction encoding space). The fifth step inserts instructions that are introduced by the third step; this is done so that each instruction has a fixed address in the stream while the stream is being processed.
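To make the forward algorithm concrete, here is a heavily simplified linear scan sketch in Perl 6 - no live range holes, no two-operand fixups, no architecture constraints - so treat it as an illustration of the textbook algorithm rather than MoarVM's implementation:

sub linear-scan(@ranges, Int $num-regs) {
    # each range is a Hash with <start end>; we fill in <reg> or <spilled>
    my @active;                 # ranges currently occupying a register
    my @free = ^$num-regs;      # register numbers still available
    for @ranges.sort({ .<start> }) -> $r {
        # expire ranges whose last use lies behind us, freeing registers
        @free.append: @active.grep({ .<end> < $r<start> }).map({ .<reg> });
        @active .= grep({ .<end> >= $r<start> });
        if @free {
            $r<reg> = @free.shift;
            @active.push: $r;
        }
        else {
            # all registers taken: evict the value live furthest ahead
            my $victim = @active.max({ .<end> });
            if $victim<end> > $r<end> {
                $r<reg> = $victim<reg>;
                $victim<spilled> = True;
                @active = @active.grep({ $_ !=== $victim });
                @active.push: $r;
            }
            else {
                $r<spilled> = True;
            }
        }
    }
    @ranges
}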

Altogether this is quite a bit of complexity and work, even for what is arguably the simplest correct global register allocation algorithm. So when I started thinking of the reverse linear scan algorithm employed by LuaJIT, the advantages became clear:
There are downsides as well, of course. Not knowing exactly how long a value will be live while processing it may cause the algorithm to make worse choices in which values to spill. But I don't think that's really a great concern, since figuring out the best possible value is practically impossible anyway, and the most commonly cited heuristic - evict the value that is live furthest in the future, because this will release a register over a longer range of code, reducing the chance that we'll need to evict again - is still available. (After all, we do always know the last use, even if we don't necessarily know the first definition).

Altogether, I'm quite excited about this algorithm; I think it will be a real simplification over the current implementation. Whether that will work out remains to be seen of course. I'll let you know!

Weekly changes in and around Perl 6: 2019.11 Complete Course

Published by liztormato on 2019-03-18T16:13:19

Andrew Shitov would like to know how you like his grant proposal for creating a complete course with exercises covering all aspects of Perl 6, aimed at everyone who is familiar with programming. A course that can be used in self-studying or as a platform for a class. Be sure to leave your comments! (Endorsement on Twitter).

February Grant Report

Jonathan Worthington reports on his progress in the month of February: about escape analysis, scalar replacement and more aggressive optimizations of inlined code.

Linter for YAML files

Alexey Melezhik introduces a new Sparrow6 plugin that checks the validity of YAML files. Good to be aware of if you’re editing YAML files on a regular basis.

Gibberish

Arne Sommer has written a blog post about why and how to create gibberish using Perl 6. All to be part of the 2 day course “Beginning Perl 6” he’ll be giving at the European PerlCon in Riga in August. (Reddit, Twitter comments).

Aλhambra Day

An event about functional programming in Granada on the 6th of April will see a presentation about functional programming in Perl 6 by none other than Elena Merelo! (In Spanish)

Videos from TechMeet

Two Perl 6 videos from the last London.pm Technical Meeting:

Impressions from GPW 2019

Martin Becker shared his impressions about the German Perl Workshop in Munich. With some good and some not so good about Perl 6. And the Worst Pie Chart Ever!

European PerlCon news

Two more sponsors: perlmaven.com and The Perl Shop. Also, time is running out on Early Bird pricing of tickets. And don’t forget to check out the brilliant PerlCon Teaser for Jonathan Worthington’s workshop!

Twin Projects

Mohammad S Anwar describes how two projects kept him busy in the past week: one of them being another London Hack Day, and the other about a Weekly Perl (5 or 6) challenge.

Why operators are useful

Guido van Rossum has written a blog post on why operators are useful, and whether or not adding an operator for merging two hashes is a sensible thing to do. It also mentions Perl, presumably Perl 6. (Twitter, Reddit comments).

Types are moving to the right

Roman Elizarov takes a look at a lot of older and newer programming languages and comes to the conclusion that modern languages specify their types to the right of the variable. (Twitter, Reddit comments).

Something about IR optimization

Bart Wiegmans has blogged about his progress on optimizing the intermediate representation of code. Very graphic, hard core stuff!

Core Developments

Questions about Perl 6

Meanwhile on Facebook

Meanwhile on perl6-users

Meanwhile on Twitter

Perl 6 in comments

Perl 6 Modules

New modules:

Updated modules:

Winding Down

A nice week with plenty of thought-provoking blog posts and comments. And some nice new optimizations as well. Be sure to tune in next week for more Perl 6 news!

brrt to the future: Something about IR optimization

Published by Bart Wiegmans on 2019-03-17T13:23:00

Hi hackers! Today I want to write about optimizing IR in the MoarVM JIT, and also a little bit about IR design itself.

One of the (major) design goals for the expression JIT was to have the ability to optimize code over the boundaries of individual MoarVM instructions. To enable this, the expression JIT first expands each VM instruction into a graph of lower-level operators. Optimization then means pattern-matching those graphs and replacing them with more efficient expressions.

As a running example, consider the idx operator. This operator takes two inputs (base and element) and a constant parameter scale and computes base+element*scale. This represents one of the operands of an 'indexed load' instruction on x86, typically used to process arrays. Such instructions allow one instruction to be used for what would otherwise be two operations (computing an address and loading a value). However, if the element of the idx operator is a constant, we can replace it instead with the addr instruction, which just adds a constant to a pointer. (For example, if element is the constant 3 and scale is 8, then base+element*scale is just base+24, which addr can encode directly.) This is an improvement over idx because we no longer need to load the value of element into a register. This saves both an instruction and valuable register space.

Unfortunately this optimization introduces a bug. (Or, depending on your point of view, brings an existing bug out into the open). The expression JIT code generation process selects instructions for subtrees (tiles) of the graph in a bottom-up fashion. These instructions represent the value computed or work performed by that subgraph. (For instance, a tree like (load (addr ? 8) 8) becomes mov ?, qword [?+8]; the question marks are filled in during register allocation). Because an instruction always represents a tree, and because the graph is an arbitrary directed acyclic graph, the code generator projects that graph as a tree by visiting each operator node only once. So each value is computed once, and that computed value is reused by all later references.

It is worth going into some detail about why the expression graph is not a tree. Aside from transformations that might be introduced by optimizations (e.g. common subexpression elimination), a template may introduce a value that has multiple references via the let: pseudo-operator. See for instance the following (simplified) template:

(let: (($foo (load (local))))
    (add $foo (sub $foo (const 1))))

Both ADD and SUB refer to the same LOAD node


In this case, both references to $foo point directly to the same load operator. Thus, the graph is not a tree. Another case in which this occurs is during linking of templates into the graph. The output of an instruction is used, if possible, directly as the input for another instruction. (This is the primary way that the expression JIT can get rid of unnecessary memory operations). But there can be multiple instructions that use a value, in which case an operator can have multiple references. Finally, instruction operands are inserted by the compiler and these can have multiple references as well.

If each operator is visited only once during code generation, then this may introduce a problem when combined with another feature - conditional expressions. For instance, if two branches of a conditional expression both refer to the same value (represented by name $foo) then the code generator will only emit code to compute its value when it encounters the first reference. When the code generator encounters $foo for the second time in the other branch, no code will be emitted. This means that in the second branch, $foo will effectively have no defined value (because the code in the first branch is never executed), and wrong values or memory corruption is then the predictable result.

This bug has always existed for as long as the expression JIT has been under development, and in the past the solution has been not to write templates which have this problem. This is made a little easier by a feature of the let: operator: it inserts a do operator which orders the declared values to be computed before the code that references them. So this is in fact non-buggy:

(let: (($foo (load (local))))  # code to compute $foo is emitted here
  (if (...)
    (add $foo (const 1))   # $foo is just a reference
    (sub $foo (const 2)))) # and here as well

The DO node is inserted for the LET operator. It ensures that the value of the LOAD node is computed before the reference in either branch


Alternatively, if a value $foo is used in the condition of the if operator, you can also be sure that it is available on both sides of the condition.

All these methods rely on the programmer being able to predict when a value will be first referenced and hence evaluated. An optimizer breaks this by design. This means that if I want the JIT optimizer to be successful, my options are:

  1. Fix the optimizer so as to not remove references that are critical for the correctness of the program
  2. Modify the input tree so that such references are either copied or moved forward
  3. Fix the code generator to emit code for a value, if it determines that an earlier reference is not available from the current block.
In other words, I first need to decide where this bug really belongs - in the optimizer, the code generator, or even the IR structure itself. The weakness of the expression IR is that expressions don't really impose a particular order. (This is unlike the spesh IR, which is instruction-based, and in which every instruction has a 'previous' and 'next' pointer). Thus, there really isn't a 'first' reference to a value before the code generator introduces the concept. This property is in fact quite handy for optimization (for instance, we can evaluate operands in whatever order is best, rather than being fixed by the input order) - so I'd really like to preserve it. But it also means that the property we're interested in - that a value is computed before it is used, in all possible code flow paths - isn't really expressible by the IR. And there is no obvious local invariant that can be maintained to ensure that this bug does not happen, so any correctness check may have to check the entire graph, which is quite impractical.

I hope this post explains why this is such a tricky problem! I have some ideas for how to get out of this, but I'll reserve those for a later post, since this one has gotten quite long enough. Until next time!

Perl 6 Inside Out: 📺 Perl 6 One-Liners slides

Published by andrewshitov on 2019-03-08T13:11:34

Here are the slides of my talk at the German Perl Workshop 2019 in Munich, which summarises a Christmas series of posts written last year. Lots of Perl 6 one-liners and related stuff such as the usage of the MAIN function. Video recording should also appear soon.


Perl 6 Inside Out: 📺 Creating a compiler in Perl 6

Published by andrewshitov on 2019-03-07T22:02:59

At the German Perl Workshop 2019 in Munich, I gave a presentation about how to create compilers and interpreters using Perl 6 grammars. This talk differs from my previous talks on this subject, so if you’ve seen them, I hope you will also enjoy this one. There was a video recording during the talk, I assume that the video will appear reasonably soon.

my Timotimo \this: Always Wear Safety Equipment When Inline Scalaring

Published by Timo Paulssen on 2019-02-21T20:44:09

MoarVM just recently got a nice optimization merged into the master branch. It's called "partial escape analysis" and comes with a specific optimization named "scalar replacement". In simple terms, it allows objects whose lifetime can be proven to end very soon to be created on the stack instead of in the garbage collector managed heap. More than that, the "partial" aspect of the escape analysis even allows this when the object can escape out of our grasp, but will not always do so.

Allocation on the stack is much cheaper than allocation in the garbage collected heap, because freeing data off of the stack is as easy as leaving the function behind.
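As an intuition pump, here's the kind of Perl 6 code that can benefit; whether spesh scalar-replaces this exact snippet depends on inlining and other details, so take it as illustrative:

my class Point { has $.x; has $.y }

my $total = 0;
for ^1_000_000 -> $i {
    # $p never escapes this block, so after inlining, escape analysis
    # may replace the object with its attributes in registers
    my $p = Point.new(x => $i, y => $i + 1);
    $total += $p.x * $p.y;
}
say $total;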

(header photo by A. Zuhri / Unsplash)

Like a big portion of optimizations in MoarVM's run-time optimizer/specializer ("spesh"), this analysis and optimization usually relies on some prior inlining of code. That's where the pun in the post's title comes from.

This is a progress report for the profiler front-end, though. So the question I'd like to answer in this post is how the programmer would check if these optimizations are being utilized in their own code.

A Full Overview

The first thing the user gets to see in the profiler's frontend is a big overview page summarizing lots of results from all aspects of the program's run. Thread creations, garbage collector runs, frame allocations made unnecessary by inlining, etc.

That page just got a new entry on it, that sums up all allocations and also shows how many allocations have been made unnecessary by scalar replacement, along with a percentage. Here's an example screenshot:

(screenshot: the allocations summary with scalar replacements on the overview page)

That on its own is really only interesting if you've run a program twice and maybe changed the code in between, so you can compare how the allocation/optimization behavior changed. However, there's also more detail to be found on the Allocations page:

Per-Type and Per-Routine Allocations/Replacements

The "Allocations" tab already gave details on which types were allocated most, and which routines did most of the allocating for any given type. Now there is an additional column that gives the number of scalar replaced objects as well:

(screenshot: the Allocations tab with the new scalar-replaced column)

Here's a screenshot showing the details of the Num type expanded to display the routines:

(screenshot: the Num type expanded to show per-routine details)

Spesh Performance Overview

One major part of Spesh is its "speculative" optimizations. They have the benefit of allowing optimizations even when something can't be proven. When some assumption is broken, a deoptimization happens, which is effectively a jump from inside an optimized bit of code back to the same position in the unoptimized code. There's also "on-stack-replacement", which is effectively a jump from inside unoptimized code to the same position in optimized code. The details are, of course, more complicated than that.

How can you find out which routines in your program (in libraries you're calling, or the "core setting" of all builtin classes and routines) are affected by deoptimization or by OSR? There is now an extra tab in the profiler frontend that gives you the numbers:

(screenshot: the new Spesh performance tab)

This page also has the first big attempt at putting hopefully helpful text directly next to the data. Below the table there are these sections:

Specializer Performance

MoarVM comes with a dynamic code optimizer called "spesh". It makes your code faster by observing at run time which types are used where, which methods end up being called in certain situations where there are multiple potential candidates, and so on. This is called specialization, because it creates versions of the code that take shortcuts based on assumptions it made from the observed data.

Deoptimization

Assumptions, however, are there to be broken. Sometimes the optimized and specialized code finds that an assumption no longer holds. Parts of the specialized code that detect this are called "guards". When a guard detects a mismatch, the running code has to be switched from the optimized code back to the unoptimized code. This is called a "deoptimization", or "deopt" for short.

Deopts are a natural part of a program's life, and at low numbers they usually aren't a problem. For example, code that reads data from a file would read from a buffer most of the time, but at some point the buffer would be exhausted and new data would have to be fetched from the filesystem. This could mean a deopt.

If, however, the profiler points out a large amount of deopts, there could be an optimization opportunity.

On-Stack Replacement (OSR)

Regular optimization activates when a function is entered, but programs often have loops that run for a long time until the containing function is entered again.

On-Stack Replacement is used to handle cases like this. Every round of the loop in the unoptimized code will check if an optimized version can be entered. This has the additional effect that a deoptimization in such code can quickly lead back into optimized code.

Situations like these can cause high numbers of deopts along with high numbers of OSRs.

I'd be happy to hear your thoughts on how clear this text is to you, especially if you're not very knowledgeable about the topics discussed. Check github for the current version of this text - it should be at https://github.com/timo/moarperf/blob/master/frontend/profiler/components/SpeshOverview.jsx#L82 - and submit an issue to the github repo, or find me on IRC, on the perl6-users mailing list, on reddit, on mastodon/the fediverse, twitter, or where-ever.

Perl 6 Inside Out: 📺 Perl 6 as a new tool for language compilers

Published by andrewshitov on 2019-02-04T08:09:17

Here’s my recent talk from FOSDEM in Brussels, given on 3 February 2019.

Perl 6 grammars are a great way to describe the grammar and implement an interpreter or a compiler of DSL or a programming language. In this talk, I will demonstrate how you can do it. During the talk, we will create an interpreter for a tiny programming language.

The engine behind the implementation will be the so-called Grammars that are available in today’s Perl 6. We will create the full language specification and describe all the actions it needs to do to execute the program.

The great part is that you no longer need to split your language implementation in traditional phases: lexer, parser, etc. Neither you need a compiler of compilers to process the formal grammar rules and emit the lexer/parse code that you will later use in your compiler. All you need is just to write some Perl 6 code. You even don’t need to be a specialist in compilers or learn numerous tools like bizon etc. to create your own language in a few hours.

gfldex: Threading nqp through a channel

Published by gfldex on 2019-02-03T21:30:21

Given that nqp is faster than plain Perl 6, and that we have threads, combining the two should give us some decent speed. Using a Supply as promised in the last post wouldn’t really help: the emit will block until the internal queue of the Supply is cleared. If we want to process files recursively, the filesystem might stall just after the recursing thread is unblocked. If we are putting pressure on the filesystem in the consumer, we are better off with a Channel that is swiftly filled with file paths.

Let’s start with a simulated consumer that will stall every now and then and takes the Channel in $c.

my @files;
react {
    whenever $c -> $path {
        @files.push: $path;
        sleep 1 if rand < 0.00001;
    }
}

If we pumped out paths as quickly as possible we could fill quite a bit of RAM and put a lot of pressure on the CPU caches. After some trial and error I found that sleeping before the .send on the Channel helps when there are more than 64 worker threads waiting to be put onto machine threads. That information is accessible via Telemetry::Instrument::ThreadPool::Snap.new<gtq>.

my $c = Channel.new;
start {
    my @dirs = '/snapshots/home-2019-01-29';
    while @dirs.shift -> str $dir {
        my Mu $dirh := nqp::opendir(nqp::unbox_s($dir));
        while my str $name = nqp::nextfiledir($dirh) {
            next if $name eq '.' | '..';
            my str $abs-path = nqp::concat( nqp::concat($dir, '/'), $name);
            next if nqp::fileislink($abs-path);
            if Telemetry::Instrument::ThreadPool::Snap.new<gtq> > 64 {
                say Telemetry::Instrument::ThreadPool::Snap.new<gtq>;
                say 'sleeping';
                sleep 0.1;
            }
            $c.send($abs-path) if nqp::stat($abs-path, nqp::const::STAT_ISREG);
            @dirs.push: $abs-path if nqp::stat($abs-path, nqp::const::STAT_ISDIR);
        }
        CATCH { default { put BOLD .Str, ' ⟨', $dir, '⟩' } }
        nqp::closedir($dirh);
    }
    $c.close;
}

Sleeping for 0.1s before sending the next value is a bit naive. It would be better to watch the number of waiting workers and only continue when it has dropped below 64. But that is a task for a different module. We don’t really have a middle ground in Perl 6 between Supply with its blocking nature and the value-pumping Channel. So such a module might actually be quite useful.
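Such a module could be little more than a wrapper around .send that watches the waiting workers - a minimal sketch, with the threshold and the poll interval picked arbitrarily:

use Telemetry;

sub send-with-backpressure(Channel $c, $value, :$max-waiting = 64) {
    # block the producer while too many pool workers sit in the queue
    while Telemetry::Instrument::ThreadPool::Snap.new<gtq> > $max-waiting {
        sleep 0.01;
    }
    $c.send($value);
}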

But that will have to wait. I seem to have stepped on a bug in IO::Handle.read while working with large binary files. We’ve got tons of tests in roast that deal with small data. Working with large data isn’t well tested, and I wonder what monsters are lurking there.

gfldex: nqp is faster then threads

Published by gfldex on 2019-02-02T12:14:51

After having too much fun with a 20-year-old filesystem and the inability of unix commands to handle odd filenames, I decided to replace find /somewhere -type f | xargs -P 10 -n 1 do-stuff with a Perl 6 script.

The first step is to traverse a directory tree. I don’t really need to keep a list of paths, but I definitely want to run stuff in parallel. Generating a supply in a thread seems to be a reasonable thing to do.

start my $s = supply {
    for '/snapshots/home-2019-01-29/' {
        emit .IO if (.IO.f & ! .IO.l);
        .IO.dir()».&?BLOCK if (.IO.d & ! .IO.l);
        CATCH { default { put BOLD .Str } }
    }
}

{
    my @files;
    react whenever $s {
        @files.push: $_;
    }
    say +@files;
    say now - ENTER now;
}

Recursion is done by calling the for block on the topic with .&?BLOCK. It’s very short and very slow. It takes 21.3s for 200891 files — find will do the same in 0.296s.

The OS won’t be the bottleneck here, so maybe threading will help. I don’t want to overwhelm the OS with filesystem requests though. The built-in Telemetry module can tell us how many worker threads are sitting on their hands at any given time. If we use Promise to start workers by hand, we can decide to avoid threading when workers are still idle.

sub recurse(IO() $_) {
    my @ret;
    @ret.push: .Str if (.IO.f & ! .IO.l);
    if (.IO.d & ! .IO.l) {
        if Telemetry::Instrument::ThreadPool::Snap.new<gtq> > 4 {
            @ret.append: do for .dir() { recurse($_) }
        } else {
            @ret.append: await do for .dir() {
                Promise.start({ recurse($_) })
            }
        }
    }
    CATCH { default { put BOLD .Str } }
    @ret.Slip
}

{
    say +recurse('/snapshots/home-2019-01-29');
    say now - ENTER now;
}

That takes 7.65s, which is a big improvement but still miles from the performance of a 20-year-old C implementation. Also, find can do the same and more on a single CPU core instead of producing a load of ~800%.

Poking around in Rakudo’s source, one can clearly see why. There are loads of IO::Path objects created and c-strings concatenated, just to unbox those c-strings and hand them over to some VM-opcodes. All I want are absolute paths I can call open with. We have to go deeper!

use nqp;

my @files;
my @dirs = '/snapshots/home-2019-01-29';
while @dirs.shift -> str $dir {
my Mu $dirh := nqp::opendir(nqp::unbox_s($dir));
while my str $name = nqp::nextfiledir($dirh) {
next if $name eq '.' | '..';
my str $abs-path = nqp::concat( nqp::concat($dir, '/'), $name);
next if nqp::fileislink($abs-path);
@files.push: $abs-path if nqp::stat($abs-path, nqp::const::STAT_ISREG);
@dirs.push: $abs-path if nqp::stat($abs-path, nqp::const::STAT_ISDIR);
}
CATCH { default { put BOLD .Str, ' ⟨', $dir, '⟩' } }
nqp::closedir($dirh);
}
say +@files; say now - ENTER now;

And this finishes in 2.58s with just 1 core and should play better in situations where not many filehandles are available. Still 9 times slower than find but workable. Wrapping it into a supply is a task for another day.

So for the time being — if you want fast you need nqp.

UPDATE: We need to check the currently waiting workers, not the number of spawned workers. Example changed to Snap.new<gtq>.

my Timotimo \this: These graphs are on Fire!

Published by Timo Paulssen on 2019-01-20T15:21:35

(header photo by JERRY / Unsplash)

Just as I experienced with this very blog post you're reading right now, knowing how to start may just be the hardest part of a great many things.

When you've just run your profile – which I'm planning to make easier in the future as well – and you're looking at the overview page, you're really getting an overview of the very broadest kind. How long did the program run? Did it spend a lot of time running the GC? Are many routines not optimized or jitted?

However, profiling is usually used to find the critical little piece of code that takes an extraordinary amount of time. This is where your optimization attempts should usually begin.

Until now, the overview page didn't mention any piece of code by name at all. This changed when I brought in a flame graph (well, icicle graph in this case).

Here are two screenshots to give you an idea of what I'm talking about:

(screenshot: the flame graph on the overview page)

Clicking on a box in the flame graph will expand the node to be 100% the width of the full graph so you can inspect the descendant nodes more easily. Here I've selected the step function:

(screenshot: the flame graph with the step function expanded)

Selecting one of the nodes gives the name of the routine, the source code line and links to the node in the call graph explorer (the rightwards arrow button) and to the routine in the routine list. On top of that, the filename and line number below the routine name are clickable if they are from the core setting, and they take you right to the implementation file on github.

The next step is, of course, to also put flame graphs into the call graph explorer. I'm not entirely sure how to make it behave when navigating upwards to the root or downwards to the selected children, i.e. whether it should keep nodes towards the root or how many.

That's already everything for today. I couldn't invest terribly much time into moarperf this and last month, but I'll continue working :)

Have a good one, and thanks for reading!
  - Timo

brrt to the future: A short post about types and polymorphism

Published by Bart Wiegmans on 2019-01-14T21:34:00

Hi all. I usually write somewhat long-winded posts, but today I'm going to try and make an exception. Today I want to talk about the expression template language used to map the high-level MoarVM instructions to low-level constructs that the JIT compiler can easily work with:

This 'language' was designed back in 2015 subject to three constraints:
Recently I've been working on adding support for floating point operations, and this means working on the type system of the expression language. Because floating point instructions operate on a distinct set of registers from integer instructions, a floating point operator is not interchangeable with an integer (or pointer) operator.

This type system is enforced in two ways. First, by the template compiler, which attempts to check if you've used all operands correctly. This operates during development, which is convenient. Second, by instruction selection, as there will simply not be any instructions available that have the wrong combinations of types. Unfortunately, that happens at runtime, and such errors are so annoying to debug that it motivated the development of the first type checker.

However, this presents two problems. One of the advantages of the expression IR is that, by virtue of having a small number of operators, it is fairly easy to analyze. Having a distinct set of operators for each type would undo that. But more importantly, there are several MoarVM instructions that are generic, i.e. that operate on integer, floating point, and pointer values. (For example, the set, getlex and bindlex instructions are generic in this way). This makes it impossible to know whether its values will be integers, pointers, or floats.

This is no problem for the interpreter since it can treat values as bags-of-bits (i.e., it can simply copy the union MVMRegister type that holds all values of all supported types). But the expression JIT works differently - it assumes that it can place any value in a register, and that it can reorder and potentially skip storing them to memory. (This saves work when the value would soon be overwritten anyway). So we need to know what register class that is, and we need to have the correct operators to manipulate a value in the right register class.

To summarize, the problem is:
There are two ways around this, and I chose to use both. First, we know as a fact for each local or lexical value in a MoarVM frame (subroutine) what type it should have. So even a generic operator like set can be resolved to a specific type at runtime, at which point we can select the correct operators. Second, we can introduce generic operators of our own. This is possible so long as we can select the correct instruction for an operator based on the types of the operands.

For instance, the store operator takes two operands, an address and a value. Depending on the type of the value (reg or num), we can always select the correct instruction (mov or movsd). It is however not possible to select different instructions for the load operator based on the type required, because instruction selection works from the bottom up. So we need a special load_num operator, but a store_num operator is not necessary. And this is true for a lot more operators than I had initially thought. For instance, aside from the (naturally generic) do and if operators, all arithmetic operators and comparison operators are generic in this way.
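As a rough Perl 6 cartoon of why bottom-up selection makes store generic while load needs its own operator (this only mimics the shape of the real tiler):

# children are tiled first, so a store can simply inspect the
# register class its value operand already ended up in
sub select-store($value-tile) {
    $value-tile<reg-class> eq 'num' ?? 'movsd [mem], xmm'
                                    !! 'mov [mem], gpr'
}

# a load is tiled before its consumer, so nothing tells us yet which
# register class the result should live in - hence a distinct operator
sub select-load($operator) {
    $operator eq 'load_num' ?? 'movsd xmm, [mem]'
                            !! 'mov gpr, [mem]'
}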

I realize that, despite my best efforts, this has become a rather long-winded post anyway.....

Anyway. For the next week, I'll be taking a slight detour, and I aim to generalize the two-operand form conversion that is necessary on x86. I'll try to write a blog about it as well, and maybe it'll be short and to the point. See you later!

brrt to the future: New years post

Published by Bart Wiegmans on 2019-01-06T21:15:00

Hi everybody! I recently read jnthn’s Perl 6 new year’s resolutions post, and I realized that it was an excellent example to emulate. So here I will attempt to share what I’ve been doing in 2018 and what I’ll be doing in 2019.

In 2018, aside from the usual refactoring, bugfixing and the like:
So 2019 starts with me trying to complete the goals specified in that grant request. I've already partially completed one goal (as explained in the interim report) - ensuring that register encoding works correctly for SSE registers in DynASM. Next up is actually ensuring support for SSE (and floating point) registers in the JIT, which is surprisingly tricky, because it means introducing a type system where there wasn't really one previously. I will have more to report on that in the near future.

After that, the first thing on my list is the support for irregular register requirements in the register allocator, which should open up the possibility of supporting various instructions.

I guess that's all for now. Speak you later!

6guts: My Perl 6 wishes for 2019

Published by jnthnwrthngtn on 2019-01-02T01:35:51

This evening, I enjoyed the New Year’s fireworks display over the beautiful Prague skyline. Well, the bit of it that was between me and the fireworks, anyway. Rather than having its fireworks display at midnight, Prague puts it at 6pm on New Year’s Day. That makes it easy for families to go to, which is rather thoughtful. It’s also, for those of us with plans to dig back into work the following day, a nice end to the festive break.

Prague fireworks over Narodni Divadlo

So, tomorrow I’ll be digging back into work, which of late has involved a lot of Perl 6. Having spent so many years working on Perl 6 compiler and runtime design and implementation, it’s been fun to spend a good amount of 2018 using Perl 6 for commercial projects. I’m hopeful that will continue into 2019. Of course, I’ll be continuing to work on plenty of Perl 6 things that are for the good of all Perl 6 users too. In this post, I’d like to share some of the things I’m hoping to work on or achieve during 2019.

Partial Escape Analysis and related optimizations in MoarVM

The MoarVM specializer learned plenty of new tricks this year, delivering some nice speedups for many Perl 6 programs. Many of my performance improvement hopes for 2019 center around escape analysis and optimizations stemming from it.

The idea is to analyze object allocations, and find pieces of the program where we can fully understand all of the references that exist to the object. The points at which we can cease to do that is where an object escapes. In the best cases, an object never escapes; in other cases, there are a number of reads and writes performed to its attributes up until its escape.

Armed with this, we can perform scalar replacement, which involves placing the attributes of the object into local registers up until the escape point, if any. As well as reducing memory operations, this means we can often prove significantly more program properties, allowing further optimization (such as getting rid of dynamic type checks). In some cases, we might never need to allocate the object at all; this should be a big win for Perl 6, which by its design creates lots of short-lived objects.

There will be various code-generation and static optimizer improvements to be done in Rakudo in support of this work also, which should result in its own set of speedups.

Expect to hear plenty about this in my posts here in the year ahead.

Decreasing startup time and base memory use

The current Rakudo startup time is quite high. I’d really like to see it fall to around half of what it currently is during 2019. I’ve got some concrete ideas on how that can be achieved, including changing the way we store and deserialize NFAs used by the parser, and perhaps also dealing with the way we currently handle method caches to have less startup impact.

Both of these should also decrease the base memory use, which is also a good bit higher than I wish.

Improving compilation times

Some folks – myself included – are developing increasingly large applications in Perl 6. For the current major project I’m working on, runtime performance is not an issue by now, but I certainly feel myself waiting a bit on compiles. Part of it is parse performance, and I’d like to look at that; in doing so, I’d also be able to speed up handling of all Perl 6 grammars.

I suspect there are some good wins to be had elsewhere in the compilation pipeline too, and the startup time improvements described above should also help, especially when we pre-compile deep dependency trees. I’d also like to look into if we can do some speculative parallel compilation.

Research into concurrency safety

In Perl 6.d, we got non-blocking await and react support, which greatly improved the scalability of Perl 6 concurrent and parallel programs. Now many thousands of outstanding tasks can be juggled across just a handful of threads (the exact number chosen according to demand and CPU count).

For Perl 6.e, which is still a good way off, I’d like to have something to offer in terms of making Perl 6 concurrent and parallel programming safer. While we have a number of higher-level constructs that eliminate various ways to make mistakes, it’s still possible to get into trouble and have races when using them.

So, I plan to spend some time this year quietly exploring and prototyping in this space. Obviously, I want something that fits in with the Perl 6 language design, and that catches real and interesting bugs – probably by making things that are liable to occasionally explode in weird ways instead reliably do so in helpful ways, such that they show up reliably in tests.

Get Cro to its 1.0 release

In the 16 months since I revealed it, Cro has become a popular choice for implementing HTTP APIs and web applications in Perl 6. It has also attracted code contributions from a couple of dozen contributors. This year, I aim to see Cro through to its 1.0 release. That will include (to save you following the roadmap link):

Comma Community, and lots of improvements and features

I founded Comma IDE in order to bring Perl 6 a powerful Integrated Development Environment. We’ve come a long way since the Minimum Viable Product we shipped back in June to the first subscribers to the Comma Supporter Program. In recent months, I’ve used Comma almost daily on my various Perl 6 projects, and by this point honestly wouldn’t want to be without it. Like Cro, I built Comma because it’s a product I wanted to use, which I think is a good place to be in when building any product.

In a few months time, we expect to start offering Comma Community and Comma Complete. The former will be free of charge, and the latter a commercial offering under a subscribe-for-updates model (just like how the supporter program has worked so far). My own Comma wishlist is lengthy enough to keep us busy for a lot more than the next year, and that’s before considering things Comma users are asking for. Expect plenty of exciting new features, as well as ongoing tweaks to make each release feel that little bit nicer to use.

Speaking, conferences, workshops, etc.

This year will see me giving my first keynote at a European Perl Conference. I’m looking forward to being in Riga again; it’s a lovely city to wander around, and I remember having some pretty nice food there too. The keynote will focus on the concurrent and parallel aspects of Perl 6; thankfully, I’ve still a good six months to figure out exactly what angle I wish to take on that, having spoken on the topic many times before!

I also plan to submit a talk or two for the German Perl Workshop, and will probably find the Swiss Perl Workshop hard to resist attending once more. And, more locally, I’d like to try and track down other Perl folks here in Prague, and see if I can help some kind of Praha.pm to happen again.

I need to keep my travel down to sensible levels, but might be able to fit in the odd other bit of speaking during the year, if it’s not too far away.

Teaching

While I want to spend most of my time building stuff rather than talking about it, I’m up for the occasional bit of teaching. I’m considering pitching a 1-day Perl 6 concurrency workshop to the Riga organizers. Then we’ll see if there’s enough folks interested in taking it. It’ll cost something, but probably much less than any other way of getting a day of teaching from me. :-)

So, down to work!

Well, a good night’s sleep first. :-) But tomorrow, another year of fun begins. I’m looking forward to it, and to working alongside many wonderful folks in the Perl community.

Perl 6 Advent Calendar: Day 25 – Calling Numbers Names

Published by uzluisfx on 2018-12-24T23:01:41

This school semester I took my first proof-based class titled “Intro to Mathematical Proof Workshop”. After having taken other math classes (Calculus, Matrix Algebra, etc.), I felt that I didn’t have that much of a mathematical foundation and up to this point, all I had been doing was purely computational mathematics sprinkled with some proofs here and there. Looking back, I found the class quite enjoyable and learning about different theorems and their proofs, mostly from number theory, has given me a new perspective of mathematics.

“How is this related to Perl 6?”, you might be asking. As I mentioned, most of the proofs that were discussed either in class or left for homework were related to number theory. If there’s one thing Perl 6 and number theory have in common, it’s their accessibility. Similar to how the content of the elementary theory of numbers can be tangible and familiar, Perl 6 can be quite approachable to beginners. In fact, beginners are encouraged to write what’s known as “baby Perl”.

Another thing they seem to share is their vastness. For example, in Perl 6 one can find many operators, while in number theory one can find a plethora of different types of numbers, from even numbers to cute numbers. For most purposes, these numbers are easy to understand, and if one has the definition of a type of number, then it’s quite easy to check if a given integer falls in that category. For example, a prime number is formally defined as follows:

An integer p > 1 is called a prime number, or simply a prime, if its only positive divisors are 1 and p. Otherwise, the integer p is termed composite.

By using this definition, we can quite simply figure out if a certain number is a prime. For example, among the first ten positive integers, 2, 3, 5 and 7 are primes. This is quite trivial for small numbers, but doing it by hand with larger numbers would become tedious in no time. This is where Perl 6 comes into the picture. Perl 6 offers many constructs/features that, even if they don’t make the job trivial, certainly simplify it. For instance, with the definition of a prime in mind, we could easily implement an algorithm that tests for primality in Perl 6:

sub isPrime( $number ) {
    # 2 and 3 are prime; 1 and anything smaller is not
    return $number > 1 if $number ≤ 3;

    # trial division: any divisor must appear at or below the square root
    loop (my $i = 2; $i² ≤ $number; $i++) {
        return False if $number %% $i;
    }

    return True;
}

Please bear in mind that this is not about writing performant code. If the code turns out that way, then that’s excellent, but it is not the goal. My aim is to showcase the ease with which a beginner can express mathematical constructs in Perl 6. It’s worth mentioning that Perl 6 already includes the subroutine (or method) is-prime that tests for primality. However, while such a built-in exists for prime numbers, that might not be the case for other types of numbers you come across, such as a factorial, a factorion or even a Catalan number. And in cases like this, Perl 6 will be helpful.
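
For example, the built-in works both as a subroutine and as a method:

say is-prime(7);  # True
say 10.is-prime;  # False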

After learning about different types of numbers, I set out to look for some peculiar numbers and see how I could implement them using Perl 6. In the process, I found this useful website that lists a bunch of numbers, their definitions and some examples. From all of these, I’ve chosen four types of numbers that aren’t stupidly difficult to implement (I still write baby Perl 😅) while being enough to illustrate some Perl 6 constructs. On the other hand, I’ve avoided those which might be too straightforward.

Let us start with…

Amicable numbers

Amicable numbers are pairs of numbers, also known as friendly numbers, each of whose aliquot parts add to give the other number.

sub aliquot-parts( $number ) {
   (1..^$number).grep: $number %% *; 
}

sub infix:<amic>( $m, $n ) {
    $m == aliquot-parts($n).sum &&
    $n == aliquot-parts($m).sum;
}

say 12 amic 28;   # False, 12 and 28 aren't amicables.
say 220 amic 284; # True, 220 and 284 are though.

A number’s aliquot parts are its factors excluding the number itself. To find the aliquot parts of a number, I’ve created the subroutine aliquot-parts, which uses 1..^$number to create a list of numbers from 1 up to $number (exclusive). This list is subsequently grepped to find those numbers in the list that evenly divide $number. In this snippet that’s achieved with the infix operator %%, which returns True if its first operand is divisible by its second operand and False otherwise. The second operand stands for any number in the aforementioned list, so I’ve used *, which in this context is known as the whatever star and creates a closure over the expression $number %% *. Thus the whole expression in the subroutine is equivalent to (1..^$number).grep: { $number %% $_ };. At the end, the subroutine returns a list of factors of $number, excluding $number itself.

To find out if two numbers are amicable, we could have used just a subroutine. However, Perl 6 allows for the creation of new operators, which are just subroutines with funny names themselves, and I’ve done just that. I created the infix operator (meaning between two operands) amic that returns True if two numbers are amicable. Otherwise, False. As you can see, the syntax to create a new operator is straightforward: the keyword sub, followed by the type of the operator (prefix, infix, postfix, etc.), the name of the operator inside quoting constructs, the expected parameters and a code block.

Factorion

A factorion is a natural number that equals the sum of the factorials of its digits in a given base.

subset Whole of Int where * ≥ 0;

sub postfix:<!>( Whole $N --> Whole ) {
    [*] 1..$N;
}

sub is-factorion( Whole $number --> Bool ) {
    $number == $number.comb.map({ Int($_)! }).sum 
}

say is-factorion(25);  # False
say is-factorion(145); # True

Recall that a factorial of a number N, which is usually denoted by N!, is the product 1 x 2 x ... x N. For example, 3! = 1 x 2 x 3 = 6. In the code snippet, I created the postfix operator ! to return the factorial of an integer operand. Thus say 3!; will work just fine in the code snippet and prints 6. How the factorial is calculated is straightforward: the range 1..$N creates a list of numbers from 1 to $N (inclusive), and then I use [...], which is known as the reduction meta-operator, with the operator * to reduce the created list to 1 x 2 x ... x $N, which effectively gives me the factorial of $N. There are many operators in Perl 6, and the meta-operator [...] can work with many of them.
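
For instance, the same meta-operator sums a list or finds its minimum just as easily:

say [+] 1..10;     # 55
say [min] 4, 2, 7; # 2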

As for the factorion, I want to know if a number is a factorion, so I created a subroutine that takes an integer and returns a Boolean. Perl 6 is gradually typed, so it allows you to type variables explicitly, specify the return type of a sub, etc. I’ve decided to type the subroutine’s parameters and its return type.

In the section about the amicable numbers, I was quite liberal regarding the subroutines’ arguments. However, here I’ve decided to comply with the definition of a factorial and only allow for whole numbers, hence the definition and use of the Whole type. In Perl 6, the subset keyword declares a new type based on an existing one. However, if I hadn’t used the where clause, I’d have ended up with just another name for the Int type, which would be redundant. So I used the where clause to constrain the new type to the desired input; in this case, anything assigned or bound to a Whole must be a non-negative integer.
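
As a quick illustration of subset with a where clause (a made-up Even type):

subset Even of Int where * %% 2;
my Even $x = 4;    # fine
# my Even $y = 3;  # would die with a type check failure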

With the is-factorion sub, I used the method comb to break up $number into its digits and then use map to find their respective factorials and sum them up. The sub returns True if $number is equal to the sum of the factorials of its digits. It returns False otherwise.

Cyclic Numbers

A cyclic number is a number with N digits, which, when multiplied by 1, 2, 3, ..., N produces the same digits in a different order.

sub is-cyclic( Int $n --> Bool ) {
    for 1..$n.chars -> $d {
        return False if $n.comb.Bag !eqv ($n * $d).comb.Bag;
    }
    return True;
}

say is-cyclic(142857); # True
say is-cyclic(95678);  # False

Here I created the subroutine is-cyclic that takes an integer and returns a Boolean value. I use a for loop over the multipliers 1 through N, where N is the number of digits, and multiply the number by each of them. Afterward I use the previously seen comb method followed by the Bag method, and compare the resulting bags with eqv. In Perl 6, a Bag is an immutable collection of distinct elements in no particular order, where each element is weighted by the number of copies in the collection. This is the kind of structure I need, since only the number’s digits and their amounts are important, not their order, and a Bag accomplishes exactly this. The subroutine returns False if the bags don’t have the same digits, or have the same digits but weighted differently. Otherwise, True is returned, indicating the number’s cyclic-ness.
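
A quick illustration of how Bags compare:

my $digits = "122".comb.Bag;
say $digits eqv "212".comb.Bag;  # True: same digits, same weights
say $digits eqv "112".comb.Bag;  # False: different weights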

Happy numbers

A happy number is defined by the following process: Starting with any positive integer, replace the number by the sum of the squares of its digits in base-ten, and repeat the process until the number either equals 1 (where it will stay), or it loops endlessly in a cycle that does not include 1.

sub is-happy( $n is copy ) {
    my $seen-numbers = :{};   # object hash remembering numbers we have met
    while $n > 1 {
        return False if $n ∈ $seen-numbers;   # a repeat means we entered a cycle
        $seen-numbers{$n} = True;
        $n = $n.comb.map(*²).sum   # replace $n with the sum of its squared digits
    }
    return True;
}

say is-happy(7);     # True
say is-happy(2018);  # False

After going through the process described in the definition, a happy number ends up being equal to 1. On the other hand, a non-happy number follows a sequence that reaches the cycle 4, 16, 37, 58, 89, 145, 42, 20, 4,… which excludes 1. Armed with this fact, I created the hash $seen-numbers to keep track of such numbers. As illustrated by the while loop, the process is repeated while $n is greater than 1, bailing out as soon as a number shows up a second time. Here the line that stands out is the one containing the symbol ∈. In set theory, if an element p is a member (or element) of a set A, then this is denoted by p ∈ A, and that is exactly what’s being tested here. If the number $n is an element of the hash, then the sub returns False. Otherwise, it returns True, which indicates the number’s happiness.
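
The ∈ operator (ASCII spelling (elem)) works with anything that coerces to a Set, for example:

say 3 ∈ (1, 2, 3);       # True
say 'd' (elem) <a b c>;  # False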

Summary

In this post, I briefly went over gradual typing, how to define a new operator, creating subtypes with the subset keyword, and the set and bag data structures. As you may have realized, Perl 6 offers many constructs that facilitate many different tasks. In this instance, it was my desire to express definitions of numbers in a more programmatic way. Yours could be totally different, but you can rest assured that Perl 6 is there to make the job easier and immensely more enjoyable.

Well…that’s all folks! Happy Christmas and a wonderful New Year!

Perl 6 Inside Out: 🎄 26/25. Overview of the Perl 6 One-Liner Advent Calendar 2018

Published by andrewshitov on 2018-12-24T20:27:55

The Perl 6 One-Liner Advent Calendar 2018 is over! Let’s take a quick look at what we have covered so far. There were a few recurring themes.

First, some one-liners from the Perl 6 Calendar 2019 were explained in more detail. We looked at how to generate random passwords and random integers, how to print the current date, and at how good Perl 6 is doing with Unicode.

Second, a number of problems from Project Euler were solved in Perl 6: #1 grepping dividable numbers, #2 adding up even Fibonacci numbers, #4 testing palindromic numbers, #5 finding the least common multiple, #7 printing the given prime number, #13 computing a sum of big numbers, #19 counting Sundays and counting them differently, #25 finding a Fibonacci number of the given length, and #34 finding a sum of the numbers that are equal to the sum of the factorials of their digits.

Third, we looked at some isolated elements of Perl 6 syntax, such as the meta-operator X, range and sequence operators, the reduction operator, or how rational numbers work in Perl 6 and how to use complex numbers in geometry. Numerous times, we were using the built-in routines map and grep, and WhateverCode objects.

Fourth, we explored a few common sequences: Fibonacci numbers and prime numbers, or the Leibniz series for computing the value of π.

Fifth, we solved a few golf problems: how to print the first Fibonacci numbers, or how to print the first prime numbers. A separate post was dedicated to the ideas of how to make the code more compact.

Sixth, we moved on to command-line tools, covered the basic options that Rakudo supports, and created a few one-liners for manipulating files: renaming a bunch of files, reversing a text file, merging two files horizontally, computing totals from table columns, or reading from multiple input files.

As a bonus, the posts from the One-Liner calendar have been translated to Chinese, thanks to Chen Yf (if I decoded the name correctly).

Also, don’t forget about my article in the regular Perl 6 Advent Calendar about how to make the grammar more compact.

Perl 6 Inside Out: 🎄 25/25. Tips and ideas for the Perl 6 Golf code

Published by andrewshitov on 2018-12-24T20:20:20

Welcome to Day 25, the last day of the Perl 6 One-Liner Advent Calendar! Traditional advent calendars have only 24 entries, and our bonus post today will be dedicated to some tips and tricks that you can use in a Perl 6 golf contest.

There is a great site, code-golf.io, where you can try solving a number of problems, and move Perl 6 to the top scores. I suspect that many problems can benefit from the techniques that were covered in the previous days of this One-Liner Advent Calendar.

Omitting topic variable

If methods are called on the topic variable $_, then the name of the variable is not really needed for Perl 6 to understand what you are talking about, so avoid explicitly naming the topic variable:

$_.say for 1..10

Using ranges for making loops

Ranges in Perl 6 are a great way to express loop details: in a few characters, you specify both the initial and final states of the loop variable. Postfix forms are usually shorter.

for 1..10 {.say}
.say for 1..10

Consider whether you can count from 0; in that case, a caret character can be used to get a range starting from 0. The following code prints the numbers 0 to 9:

.say for ^10

Choosing between a range and a sequence

In loops, sequences can work exactly the same as a range would. The choice may depend on whether the golf software counts bytes or Unicode characters. In the first case, the two dots of a range are preferable over the three dots of a sequence. In the second case, use the single-character Unicode ellipsis:

.say for 1..10
.say for 1...10
.say for 1…10

When you need to count downwards, sequences are your friends, as they can deduce the direction in which the loop counter changes:

.say for 10…1

Using map instead of a loop

In some cases, especially when you have to perform more than one action with the loop variable, try using map to iterate over all the values:

(^10).map: *.say

Omitting parentheses

Unlike Perl 5, Perl 6 does not force you to use parentheses in condition checks in the regular form:

if ($x > 0) {say $x;exit}
if $x > 0 {say $x;exit}

Sometimes, you will want to omit parentheses in function calls, too.

Nor do you need parentheses when declaring arrays or hashes. With arrays, use the quoting construct on top of that:

my @a = ('alpha', 'beta')
my @b=<alpha beta>

Using chained comparisons

Another interesting feature is using more than one condition in a single expression:

 say $z if $x < 10 < $y

Choosing between methods and functions

In many cases, you can choose between calling a function and using a method. Method calls can additionally be chained one after another, so you can save a lot of parentheses or spaces:

(^10).map({.sin}).grep: *>0 

When both a method and a stand-alone function exist, the method call is often shorter, or at least the same length once you omit the parentheses.

abs($x)
abs $x
$x.abs

Using Unicode characters

Perl 6 operators often have Unicode equivalents, where you can express a wordy construct with a single character. Compare:

if $x=~=$y
if $x≅$y

Built-in constants are also available in the Unicode space, for example, pi vs π, or Inf vs ∞.

There are many numbers, both small and big, that can be replaced with a single Unicode symbol: 1/3 vs ⅓, for example.

Using superscripts

Superscripts are great for calculating powers. Compare:

say $x**2
$x².say

Using \ to make sigilless variables

Don’t forget about the following way of binding containers and creating a kind of sigilless variable:

my \a=42;say a

Using default parameters

When you are working with functions or class methods, check if there are default values in their signatures. Also check if there is an alternative variant with positional arguments. Compare, for example, three ways of creating a date object.

Date.new(year=>2019,month=>1,day=>1)
Date.new(year=>2019)
Date.new(2019,1,1)

Using && instead of if

Boolean expressions can save a few characters, as Perl 6 will not evaluate the second operand if the first one already determines the result. For example:

.say if $x>0   
$x>0&&.say

Choosing between put and say

Finally, sometimes it is better to use put instead of say. In some cases, you will be free from the brackets in the output when printing arrays. In other cases, you will get all the values instead of a concise representation, for example when working with ranges:

> say 1..10
1..10
> put 1..10
1 2 3 4 5 6 7 8 9 10

Till next year!

You can also find many interesting ideas in the last year’s advent post by Aleks-Daniel Jakimenko-Aleksejev.

But this time, this Perl 6 One-Liner Advent Calendar is completely over. There will be one more post with an overview of everything published in the last 25 days.

I wish you all the best with your further Perl 6 adventures, whether it be one-liners or industrial-scale applications. See you next year in another advent calendar, but don’t forget that perl6.online continues its work, and more posts will be published during 2019!

Perl 6 Advent Calendar: Day 24 – Topic Modeling with Perl 6

Published by titsuki on 2018-12-24T00:01:47

Hi, everyone.

Today, let me introduce Algorithm::LDA.
This module is a Latent Dirichlet Allocation (i.e., LDA) implementation for topic modeling.

Introduction

What’s LDA? LDA is one of the popular unsupervised machine learning methods.
It models the document generation process and represents each document as a mixture of topics.

So, what does “a mixture of topics” mean? Fig. 1 shows an article in which some of the words are highlighted in three colors: yellow, pink, and blue. Words about genetics are marked in yellow; words about evolutionary biology are marked in pink; words about data analysis are marked in blue. Imagine all of the words in this article are colored, then we can represent this article as a mixture of topics (i.e., colors).

Fig. 1: an article with topic-related words highlighted in three colors. (This image is from “Probabilistic topic models.” (David Blei 2012))

OK, then I’ll demonstrate how to use Algorithm::LDA in the next section.

Modeling Quotations

In this article, we explore Wikiquote. Wikiquote is a crowd-sourced platform providing sourced quotations.
By using the Wikiquote API, we get quotations that are used for LDA estimation. After that, we execute LDA and plot the result.
Finally, we create an information retrieval application using the resulting model.

Preliminary

Wikiquote API

Wikiquote has an action API that provides a means of getting Wikiquote resources.
For example, you can get the content of the Main Page as follows:

$ curl "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&titles=Main%20Page&rvprop=content&format=json"

The result of the above command is:

{"batchcomplete":"","warnings":{"main":{"*":"Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes. Use [[Special:ApiFeatureUsage]] to see usage of deprecated features by your application."},"revisions":{"*":"Because \"rvslots\" was not specified, a legacy format has been used for the output. This format is deprecated, and in the future the new format will always be used."}},"query":{"pages":{"1":{"pageid":1,"ns":0,"title":"Main Page","revisions":[{"contentformat":"text/x-wiki","contentmodel":"wikitext","*":"
\n
{{Main page header}}
\n
{{Main Page Quote of the day}}
\n</div>\n\n
\n{{Main Page Selected pages}}\n{{Main categories}}\n
\n\n
\n{{New pages}}\n{{Main Page Community}}\n
\n\n
\n==Wikiquote's sister projects==\n{{otherwiki}}\n\n==Wikiquote languages==\n{{Wikiquotelang}}\n
\n__NOTOC__ __NOEDITSECTION__\n{{noexternallanglinks:ang|simple}}\n[[Category:Main page]]"}]}}}}

WWW

WWW by Zoffix Znet is a library that provides an easy-to-use API for fetching and parsing JSON.
For instance, as the README says, you can easily get content jget(URL)<HASHKEY>-style:

say jget('https://httpbin.org/get?foo=42&bar=x')<args><foo>;

To install WWW:

$ zef install WWW

Chart::Gnuplot

Chart::Gnuplot by titsuki provides bindings for gnuplot.

To install Chart::Gnuplot:

$ zef install Chart::Gnuplot

In this article, we use this module; however, if you are unfamiliar with gnuplot, there are many alternatives: SVG::Plot, Graphics::PLplot, or calling matplotlib functions via Inline::Python.

Stopwords from NLTK

NLTK is a toolkit for natural language processing.
Besides APIs, it also provides corpora.
You can get stopwords for English via “70. Stopwords Corpus”: http://www.nltk.org/nltk_data/

Exercise 1: Get Quotations and Create Cleaned Documents

At the beginning, we have to get quotations from Wikiquote and create clean documents.

The main goal of this section is to create documents according to the following format:

<docid> <personid> <word> <word> <word> ...
<docid> <personid> <word> <word> <word> ...
<docid> <personid> <word> <word> <word> ...

The whole source code is:

use v6.c;
use WWW;
use URI::Escape;

sub get-members-from-category(Str $category --> List) {
    my $member-url = "https://en.wikiquote.org/w/api.php?action=query&list=categorymembers&cmtitle={$category}&cmlimit=100&format=json";
    @(jget($member-url)<query><categorymembers>.map(*<title>));
}

sub get-pages(Str @members, Int $batch = 50 --> List) {
    my Int $start = 0;
    my @pages;
    while $start < @members {
        my $list = @members[$start..^List($start + $batch, +@members).min].map({ uri_escape($_) }).join('%7C');
        my $url = "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&formatversion=2&titles={$list}";
        @pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });
        $start += $batch;
    }
    @pages;
}

sub create-documents-from-pages(@pages, @members --> List) {
    my @documents;
    for @pages -> $page {
        my @quotations = $page<body>.split("\n")\
        .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
        .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
        .map(*.subst("&#91;", "[", :g))\
        .map(*.subst("&#93;", "]", :g))\
        .map(*.subst("&amp;", "&", :g))\
        .map(*.subst("&nbsp;", "", :g))\
        .map(*.subst(/:i [ \<\/?\s?br\> | \<br\s?\/?\> ]/, " ", :g))\
        .grep(/^\*<-[*]>/)\
        .map(*.subst(/^\*\s+/, ""));

        # Note: The order of the array the Wikiquote API returns is not guaranteed.
        my Int $index = @members.pairs.grep({ .value eq $page<title> }).map(*.key).head;
        @documents.push(%(body => $_, personid => $index)) for @quotations;
    }
    @documents.sort({ $^a<personid> <=> $^b<personid> }).pairs.map({ %(docid => .key, personid => .value<personid>, body => .value<body>) }).list
}

my Str @members = get-members-from-category("Category:1954_births");
my @pages = get-pages(@members);
my @documents = create-documents-from-pages(@pages, @members);

my $docfh = open "documents.txt", :w;
$docfh.say((.<docid>, .<personid>, .<body>).join(" ")) for @documents;
$docfh.close;

my $memfh = open "members.txt", :w;
$memfh.say($_) for @members;
$memfh.close;

First, we get the members listed in the “Category:1954_births” page. I chose the year that the Perl 6 designer was born in:

my Str @members = get-members-from-category("Category:1954_births");

where get-members-from-category gets members via Wikiquote API:

sub get-members-from-category(Str $category --> List) {
    my $member-url = "https://en.wikiquote.org/w/api.php?action=query&list=categorymembers&cmtitle={$category}&cmlimit=100&format=json";
    @(jget($member-url)<query><categorymembers>.map(*<title>));
}

Next, call get-pages:

my @pages = get-pages(@members);

get-pages is a subroutine that gets pages of the given titles (i.e., members):

sub get-pages(Str @members, Int $batch = 50 --> List) {
    my Int $start = 0;
    my @pages;
    while $start < @members {
        my $list = @members[$start..^List($start + $batch, +@members).min].map({ uri_escape($_) }).join('%7C');
        my $url = "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&formatversion=2&titles={$list}";
        @pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });
        $start += $batch;
    }
    @pages;
}

where @members[$start..^List($start + $batch, +@members).min] is a slice of length $batch, and the elements of the slice are percent-encoded by uri_escape and joined by %7C (i.e., a percent-encoded pipe symbol).
In this case, one of the resulting $list is:

Mumia%20Abu-Jamal%7CRene%20Balcer%7CIain%20Banks%7CGerard%20Batten%7CChristie%20Brinkley%7CJames%20Cameron%20%28director%29%7CEugene%20Chadbourne%7CJackie%20Chan%7CChang%20Yu-hern%7CLee%20Child%7CHugo%20Ch%C3%A1vez%7CDon%20Coscarelli%7CElvis%20Costello%7CDaayiee%20Abdullah%7CThomas%20H.%20Davenport%7CGerardine%20DeSanctis%7CAl%20Di%20Meola%7CKevin%20Dockery%20%28author%29%7CJohn%20Doe%20%28musician%29%7CF.%20J.%20Duarte%7CIain%20Duncan%20Smith%7CHerm%20Edwards%7CAbdel%20Fattah%20el-Sisi%7CRob%20Enderle%7CRecep%20Tayyip%20Erdo%C4%9Fan%7CAlejandro%20Pe%C3%B1a%20Esclusa%7CHarvey%20Fierstein%7CCarly%20Fiorina%7CGary%20L.%20Francione%7CAshrita%20Furman%7CMary%20Gaitskill%7CGeorge%20Galloway%7C%C5%BDeljko%20Glasnovi%C4%87%7CGary%20Hamel%7CFran%C3%A7ois%20Hollande%7CKazuo%20Ishiguro%7CJean-Claude%20Juncker%7CAnish%20Kapoor%7CGuy%20Kawasaki%7CRobert%20Francis%20Kennedy%2C%20Jr.%7CLawrence%20M.%20Krauss%7CAnatoly%20Kudryavitsky%7CAnne%20Lamott%7CJoep%20Lange%7CAng%20Lee%7CLi%20Bin%7CRay%20Liotta%7CPeter%20Lipton%7CJames%20D.%20Macdonald%7CKen%20MacLeod

Note that the get-pages subroutine uses the hash contextualizer %() for creating a sequence of hashes:

@pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });
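
In isolation, %() simply builds a hash out of whatever pairs it is given; here is a minimal sketch of the same pattern (the keys n and square are made up for the example):

my @records = (1..3).map({ %( n => $_, square => $_ ** 2 ) });
say @records[2]<square>;  # OUTPUT: 9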

After that, we call create-documents-from-pages:

my @documents = create-documents-from-pages(@pages, @members);

create-documents-from-pages creates documents from each page:

sub create-documents-from-pages(@pages, @members --> List) {
    my @documents;
    for @pages -> $page {
        my @quotations = $page<body>.split("\n")\
        .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
        .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
        .map(*.subst("&#91;", "[", :g))\
        .map(*.subst("&#93;", "]", :g))\
        .map(*.subst("&amp;", "&", :g))\
        .map(*.subst("&nbsp;", "", :g))\
        .map(*.subst(/:i [ \<\/?\s?br\> | \<br\s?\/?\> ]/, " ", :g))\
        .grep(/^\*<-[*]>/)\
        .map(*.subst(/^\*\s+/, ""));

        # Note: The order of the array the Wikiquote API returns is not guaranteed.
        my Int $index = @members.pairs.grep({ .value eq $page<title> }).map(*.key).head;
        @documents.push(%(body => $_, personid => $index)) for @quotations;
    }
    @documents.sort({ $^a<personid> <=> $^b<personid> }).pairs.map({ %(docid => .key, personid => .value<personid>, body => .value<body>) }).list
}

where .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g)) and .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g)) are conversion steps that keep the display text and drop the internal-link part of anchor texts. For example, [[Perl]] is reduced to Perl. For more syntax info, see: https://docs.perl6.org/language/regexes#Named_captures or https://docs.perl6.org/routine/subst
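
A stripped-down version of the same idea, with a named capture keeping only the part before the pipe:

say "[[Perl|the language]]".subst(
    / '[[' $<text>=(<-[\[\]|]>+?) '|' <-[\[\]|]>+? ']]' /,
    { $<text> }
);  # OUTPUT: Perl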

After some cleaning operations (e.g., .map(*.subst("&#91;", "[", :g))), we extract the quotation lines.
.grep(/^\*<-[*]>/) finds lines starting with a single asterisk, because most of the quotations appear in such lines.

Next, .map(*.subst(/^\*\s+/, "")) deletes each leading asterisk, since the asterisk itself isn’t part of the quotation.

Finally, we save the documents and members (i.e., titles):

my $docfh = open "documents.txt", :w;
$docfh.say((.<docid>, .<personid>, .<body>).join(" ")) for @documents;
$docfh.close;

my $memfh = open "members.txt", :w;
$memfh.say($_) for @members;
$memfh.close;

Exercise 2: Execute LDA and Visualize the Result

In the previous section, we saved the cleaned documents.
In this section, we use the documents for LDA estimation and visualize the result.
The goal of this section is to plot a document-topic distribution and write a topic-word table.

The whole source code is:

use v6.c;
use Algorithm::LDA;
use Algorithm::LDA::Formatter;
use Algorithm::LDA::LDAModel;
use Chart::Gnuplot;
use Chart::Gnuplot::Subset;

sub create-model(@documents --> Algorithm::LDA::LDAModel) {
    my $stopwords = "stopwords/english".IO.lines.Set;
    my &tokenizer = -> $line { $line.words.map(*.lc).grep(-> $w { ($stopwords !(cont) $w) and $w !~~ /^[ <:S> | <:P> ]+$/ }) };
    my ($documents, $vocabs) = Algorithm::LDA::Formatter.from-plain(@documents.map({ my ($, $, *@body) = .words; @body.join(" ") }), &tokenizer);
    my Algorithm::LDA $lda .= new(:$documents, :$vocabs);
    my Algorithm::LDA::LDAModel $model = $lda.fit(:num-topics(10), :num-iterations(500), :seed(2018));
    $model
}

sub plot-topic-distribution($model, @members, @documents, $search-regex = rx/Larry/) {
    my $target-personid = @members.pairs.grep({ .value ~~ $search-regex }).map(*.key).head;
    my $docid = @documents.map({ my ($docid, $personid, *@body) = .words; %(docid => $docid, personid => $personid, body => @body.join(" ")) })\
    .grep({ .<personid> == $target-personid and .<body> ~~ /:i << perl >>/}).map(*<docid>).head;

    note("@documents[$docid] is selected");
    my ($row-size, $col-size) = $model.document-topic-matrix.shape;
    my @doc-topic = gather for ($docid X ^$col-size) -> ($i, $j) { take $model.document-topic-matrix[$i;$j]; }
    my Chart::Gnuplot $gnu .= new(:terminal("png"), :filename("topics.png"));
    $gnu.command("set boxwidth 0.5 relative");
    my AnyTicsTic @tics = @doc-topic.pairs.map({ %(:label(.key), :pos(.key)) });
    $gnu.legend(:off);
    $gnu.xlabel(:label("Topic"));
    $gnu.ylabel(:label("P(z|theta,d)"));
    $gnu.xtics(:tics(@tics));
    $gnu.plot(:vertices(@doc-topic.pairs.map({ @(.key, .value.exp) })), :style("boxes"), :fill("solid"));
    $gnu.dispose;
}

sub write-nbest($model) {
  my $topics := $model.nbest-words-per-topic(10);
  for ^(10/5) -> $part-i {
    say "|" ~ (^5).map(-> $t { "topic { $part-i * 5 + $t }" }).join("|") ~ "|";
    say "|" ~ (^5).map({ "----" }).join("|") ~ "|";
    for ^10 -> $rank {
        say "|" ~ gather for ($part-i * 5)..^($part-i * 5 + 5) -> $topic {
            take @($topics)[$topic;$rank].key;
        }.join("|") ~ "|";
    }
    "".say;
  }
}

sub save-model($model) {
  my @document-topic-matrix := $model.document-topic-matrix;
  my ($document-size, $topic-size) = @document-topic-matrix.shape;
  my $doctopicfh = open "document-topic.txt", :w;

  $doctopicfh.say: ($document-size, $topic-size).join(" ");
  for ^$document-size -> $doc-i {
    $doctopicfh.say: gather for ^$topic-size -> $topic { take @document-topic-matrix[$doc-i;$topic] }.join(" ");
  }
  $doctopicfh.close;

  my @topic-word-matrix := $model.topic-word-matrix;
  my ($, $word-size) = @topic-word-matrix.shape;
  my $topicwordfh = open "topic-word.txt", :w;

  $topicwordfh.say: ($topic-size, $word-size).join(" ");
  for ^$topic-size -> $topic-i {
    $topicwordfh.say: gather for ^$word-size -> $word { take @topic-word-matrix[$topic-i;$word] }.join(" ");
  }
  $topicwordfh.close;

  my @vocabulary := $model.vocabulary;
  my $vocabfh = open "vocabulary.txt", :w;

  $vocabfh.say($_) for @vocabulary;
  $vocabfh.close;
}

my @documents = "documents.txt".IO.lines;
my $model = create-model(@documents);
my @members = "members.txt".IO.lines;
plot-topic-distribution($model, @members, @documents);
write-nbest($model);
save-model($model);

First, we load the cleaned documents and call create-model:

my @documents = "documents.txt".IO.lines;
my $model = create-model(@documents);

create-model creates an LDA model from the given documents:

sub create-model(@documents --> Algorithm::LDA::LDAModel) {
    my $stopwords = "stopwords/english".IO.lines.Set;
    my &tokenizer = -> $line { $line.words.map(*.lc).grep(-> $w { ($stopwords !(cont) $w) and $w !~~ /^[ <:S> | <:P> ]+$/ }) };
    my ($documents, $vocabs) = Algorithm::LDA::Formatter.from-plain(@documents.map({ my ($, $, *@body) = .words; @body.join(" ") }), &tokenizer);
    my Algorithm::LDA $lda .= new(:$documents, :$vocabs);
    my Algorithm::LDA::LDAModel $model = $lda.fit(:num-topics(10), :num-iterations(500), :seed(2018));
    $model
}

where $stopwords is a set of English stopwords from NLTK (mentioned in the Preliminary section), and &tokenizer is a custom tokenizer for Algorithm::LDA::Formatter.from-plain. The tokenizer converts a given sentence as follows: it splits the sentence into words, lowercases each word, and filters out stopwords as well as tokens consisting solely of symbol or punctuation characters.

Algorithm::LDA::Formatter.from-plain creates numerical native documents (i.e., each word in a document is mapped to its corresponding vocabulary id, and this id is represented by a C int32) and a vocabulary from a list of texts.

After creating an Algorithm::LDA instance using the above numerical documents, we can start LDA estimation with Algorithm::LDA.fit. In this example, we set the number of topics to 10, the number of iterations to 500, and the seed for srand to 2018.

Next, we plot a document-topic distribution. Before this plotting, we load the saved members:

my @members = "members.txt".IO.lines;
plot-topic-distribution($model, @members, @documents);

plot-topic-distribution plots topic distribution with Chart::Gnuplot:

sub plot-topic-distribution($model, @members, @documents, $search-regex = rx/Larry/) {
    my $target-personid = @members.pairs.grep({ .value ~~ $search-regex }).map(*.key).head;
    my $docid = @documents.map({ my ($docid, $personid, *@body) = .words; %(docid => $docid, personid => $personid, body => @body.join(" ")) })\
    .grep({ .<personid> == $target-personid and .<body> ~~ /:i << perl >>/}).map(*<docid>).head;

    note("@documents[$docid] is selected");
    my ($row-size, $col-size) = $model.document-topic-matrix.shape;
    my @doc-topic = gather for ($docid X ^$col-size) -> ($i, $j) { take $model.document-topic-matrix[$i;$j]; }
    my Chart::Gnuplot $gnu .= new(:terminal("png"), :filename("topics.png"));
    $gnu.command("set boxwidth 0.5 relative");
    my AnyTicsTic @tics = @doc-topic.pairs.map({ %(:label(.key), :pos(.key)) });
    $gnu.legend(:off);
    $gnu.xlabel(:label("Topic"));
    $gnu.ylabel(:label("P(z|theta,d)"));
    $gnu.xtics(:tics(@tics));
    $gnu.plot(:vertices(@doc-topic.pairs.map({ @(.key, .value.exp) })), :style("boxes"), :fill("solid"));
    $gnu.dispose;
}

In this example, we plot the topic distribution of a Larry Wall quotation (“Although the Perl Slogan is There’s More Than One Way to Do It, I hesitate to make 10 ways to do something.”):

After the plotting, we call write-nbest:

write-nbest($model);

In LDA, what each topic represents is expressed as a list of words. write-nbest writes a markdown-style topic-word distribution table:

sub write-nbest($model) {
  my $topics := $model.nbest-words-per-topic(10);
  for ^(10/5) -> $part-i {
    say "|" ~ (^5).map(-> $t { "topic { $part-i * 5 + $t }" }).join("|") ~ "|";
    say "|" ~ (^5).map({ "----" }).join("|") ~ "|";
    for ^10 -> $rank {
        say "|" ~ gather for ($part-i * 5)..^($part-i * 5 + 5) -> $topic {
            take @($topics)[$topic;$rank].key;
        }.join("|") ~ "|";
    }
    "".say;
  }
}

The result is:

|topic 0|topic 1|topic 2|topic 3|topic 4|
|----|----|----|----|----|
|would|scotland|black|could|one|
|it’s|country|mr.|first|work|
|believe|one|lot|law|new|
|one|political|play|college|human|
|took|world|official|basic|process|
|much|need|new|speak|business|
|don’t|must|reacher|language|becomes|
|ever|national|five|every|good|
|far|many|car|matter|world|
|fighting|us|road|right|knowledge|

|topic 5|topic 6|topic 7|topic 8|topic 9|
|----|----|----|----|----|
|apple|united|people|like|*/|
|likely|war|would|one|die|
|company|states|i’m|something|und|
|jobs|years|know|think|quantum|
|even|would|think|way|play|
|steve|american|want|things|noble|
|life|president|get|perl|home|
|like|human|going|long|dog|
|end|must|say|always|student|
|small|us|go|really|ist|

As you can see, the quotation of “Although the Perl Slogan is There’s More Than One Way to Do It, I hesitate to make 10 ways to do something.” contains “one”, “way” and “perl”. This is the reason why this quotation is mainly composed of topic 8.

For the next section, we save the model with the save-model subroutine:

sub save-model($model) {
  my @document-topic-matrix := $model.document-topic-matrix;
  my ($document-size, $topic-size) = @document-topic-matrix.shape;
  my $doctopicfh = open "document-topic.txt", :w;

  $doctopicfh.say: ($document-size, $topic-size).join(" ");
  for ^$document-size -> $doc-i {
    $doctopicfh.say: gather for ^$topic-size -> $topic { take @document-topic-matrix[$doc-i;$topic] }.join(" ");
  }
  $doctopicfh.close;

  my @topic-word-matrix := $model.topic-word-matrix;
  my ($, $word-size) = @topic-word-matrix.shape;
  my $topicwordfh = open "topic-word.txt", :w;

  $topicwordfh.say: ($topic-size, $word-size).join(" ");
  for ^$topic-size -> $topic-i {
    $topicwordfh.say: gather for ^$word-size -> $word { take @topic-word-matrix[$topic-i;$word] }.join(" ");
  }
  $topicwordfh.close;

  my @vocabulary := $model.vocabulary;
  my $vocabfh = open "vocabulary.txt", :w;

  $vocabfh.say($_) for @vocabulary;
  $vocabfh.close;
}

Exercise 3: Create Quotation Search Engine

In this section, we create a quotation search engine which uses the model created in the previous section.
More specifically, we create an LDA-based document model (Xing Wei and W. Bruce Croft 2006) and make a CLI tool that can search quotations. (Note that the words “token” and “word” are interchangeable in this section.)

The whole source code is:

use v6.c;

sub MAIN(Str :$query!) {
    my \doc-topic-iter = "document-topic.txt".IO.lines.iterator;
    my \topic-word-iter = "topic-word.txt".IO.lines.iterator;
    my ($document-size, $topic-size) = doc-topic-iter.pull-one.words;
    my ($, $word-size) = topic-word-iter.pull-one.words;

    my Num @document-topic[$document-size;$topic-size];
    my Num @topic-word[$topic-size;$word-size];

    for ^$document-size -> $doc-i {
        my \maybe-line := doc-topic-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @document-topic[$doc-i;$_] = @line[$_];
        }
    }

    for ^$topic-size -> $topic-i {
        my \maybe-line := topic-word-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @topic-word[$topic-i;$_] = @line[$_];
        }
    }

    my %vocabulary = "vocabulary.txt".IO.lines.pairs>>.antipair.hash;
    my @members = "members.txt".IO.lines;
    my @documents = "documents.txt".IO.lines;
    my @docbodies = @documents.map({ my ($, $, *@body) = .words; @body.join(" ") });
    my %doc-to-person = @documents.map({ my ($docid, $personid, $) = .words; %($docid => $personid) }).hash;
    my @query = $query.words.map(*.lc);

    my @sorted-list = gather for ^$document-size -> $doc-i {
        my Num $log-prob = gather for @query -> $token {
            my Num $log-ml-prob = Pml(@docbodies, $doc-i, $token);
            my Num $log-lda-prob = Plda($token, $topic-size, $doc-i, %vocabulary, @document-topic, @topic-word);
            take log-sum(log(0.2) + $log-ml-prob, log(0.8) + $log-lda-prob);
        }.sum;
        take %(doc-i => $doc-i, log-prob => $log-prob);
    }.sort({ $^b<log-prob> <=> $^a<log-prob> });

    for ^10 {
        my $docid = @sorted-list[$_]<doc-i>;
        sprintf("\"%s\" by %s %f", @docbodies[$docid], @members[%doc-to-person{$docid}], @sorted-list[$_]<log-prob>).say;
    }

}

sub Pml(@docbodies, $doc-i, $token --> Num) {
    my Int $num-tokens = @docbodies[$doc-i].words.grep({ /:i^ $token $/ }).elems;
    my Int $total-tokens = @docbodies[$doc-i].words.elems;
    return -100e0 if $total-tokens == 0 or $num-tokens == 0;
    log($num-tokens) - log($total-tokens);
}

sub Plda($token, $topic-size, $doc-i, %vocabulary is raw, @document-topic is raw, @topic-word is raw --> Num) {
    gather for ^$topic-size -> $topic {
        if %vocabulary{$token}:exists {
            take @document-topic[$doc-i;$topic] + @topic-word[$topic;%vocabulary{$token}];
        } else {
            take -100e0;
        }
    }.reduce(&log-sum);
}

sub log-sum(Num $log-a, Num $log-b --> Num) {
    if $log-a < $log-b {
        return $log-b + log(1 + exp($log-a - $log-b))
    } else {
        return $log-a + log(1 + exp($log-b - $log-a))
    }
}

At the beginning, we load the saved model and prepare @document-topic, @topic-word, %vocabulary, @documents, @docbodies, %doc-to-person and @members:

    my \doc-topic-iter = "document-topic.txt".IO.lines.iterator;
    my \topic-word-iter = "topic-word.txt".IO.lines.iterator;
    my ($document-size, $topic-size) = doc-topic-iter.pull-one.words;
    my ($, $word-size) = topic-word-iter.pull-one.words;

    my Num @document-topic[$document-size;$topic-size];
    my Num @topic-word[$topic-size;$word-size];

    for ^$document-size -> $doc-i {
        my \maybe-line = doc-topic-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @document-topic[$doc-i;$_] = @line[$_];
        }
    }

    for ^$topic-size -> $topic-i {
        my \maybe-line = topic-word-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @topic-word[$topic-i;$_] = @line[$_];
        }
    }

    my %vocabulary = "vocabulary.txt".IO.lines.pairs>>.antipair.hash;
    my @members = "members.txt".IO.lines;
    my @documents = "documents.txt".IO.lines;
    my @docbodies = @documents.map({ my ($, $, *@body) = .words; @body.join(" ") });
    my %doc-to-person = @documents.map({ my ($docid, $personid, $) = .words; %($docid => $personid) }).hash;

Next, we set @query using the option :$query:

my @query = $query.words.map(*.lc);

After that, we compute the probability P(query|document) based on Eq. 9 of the aforementioned paper (note that we use logarithms to avoid underflow and set the parameter mu to zero) and sort the documents by it.

    my @sorted-list = gather for ^$document-size -> $doc-i {
        my Num $log-prob = gather for @query -> $token {
            my Num $log-ml-prob = Pml(@docbodies, $doc-i, $token);
            my Num $log-lda-prob = Plda($token, $topic-size, $doc-i, %vocabulary, @document-topic, @topic-word);
            take log-sum(log(0.2) + $log-ml-prob, log(0.8) + $log-lda-prob);
        }.sum;
        take %(doc-i => $doc-i, log-prob => $log-prob);
    }.sort({ $^b<log-prob> <=> $^a<log-prob> });
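
The log-sum calls are what keep this numerically stable: instead of converting back to raw probabilities (which could underflow), we combine two log-probabilities x and y with the standard log-sum-exp identity

log(e^x + e^y) = max(x, y) + log(1 + e^(-|x - y|))

which is exactly what the log-sum subroutine below implements: it adds log(1 + exp(smaller - larger)) to the larger of its two arguments.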

Plda adds the logarithmic topic-given-document probability (i.e., ln P(topic|theta,document)) and the word-given-topic probability (i.e., ln P(word|phi,topic)) for each topic, and sums them up with .reduce(&log-sum):

sub Plda($token, $topic-size, $doc-i, %vocabulary is raw, @document-topic is raw, @topic-word is raw --> Num) {
    gather for ^$topic-size -> $topic {
        if %vocabulary{$token}:exists {
            take @document-topic[$doc-i;$topic] + @topic-word[$topic;%vocabulary{$token}];
        } else {
            take -100e0;
        }
    }.reduce(&log-sum);
}

and Pml (ml means Maximum Likelihood) counts the occurrences of $token and normalizes by the total number of tokens in the document (note that this computation is also conducted in log space):

sub Pml(@docbodies, $doc-i, $token --> Num) {
    my Int $num-tokens = @docbodies[$doc-i].words.grep({ /:i^ $token $/ }).elems;
    my Int $total-tokens = @docbodies[$doc-i].words.elems;
    return -100e0 if $total-tokens == 0 or $num-tokens == 0;
    log($num-tokens) - log($total-tokens);
}

OK, then let’s execute!

query “perl”:

$ perl6 search-quotation.p6 --query="perl"
"Perl will always provide the null." by Larry Wall -3.301156
"Perl programming is an *empirical* science!" by Larry Wall -3.345189
"The whole intent of Perl 5's module system was to encourage the growth of Perl culture rather than the Perl core." by Larry Wall -3.490238
"I dunno, I dream in Perl sometimes..." by Larry Wall -3.491790
"At many levels, Perl is a 'diagonal' language." by Larry Wall -3.575779
"Almost nothing in Perl serves a single purpose." by Larry Wall -3.589218
"Perl has a long tradition of working around compilers." by Larry Wall -3.674111
"As for whether Perl 6 will replace Perl 5, yeah, probably, in about 40 years or so." by Larry Wall -3.684454
"Well, I think Perl should run faster than C." by Larry Wall -3.771155
"It's certainly easy to calculate the average attendance for Perl conferences." by Larry Wall -3.864075

query “apple”:

$ perl6 search-quotation.p6 --query="apple"
"Steve Jobs is the"With phones moving to technologies such as Apple Pay, an unwillingness to assure security could create a Target-like exposure that wipes Apple out of the market." by Rob Enderle -3.841538
"*:From Joint Apple / HP press release dated 1 January 2004 available [http://www.apple.com/pr/library/2004/jan/08hp.html here]." by Carly Fiorina -3.904489
"Samsung did to Apple what Apple did to Microsoft, skewering its devoted users and reputation, only better. ... There is a way for Apple to fight back, but the company no longer has that skill, and apparently doesn't know where to get it, either." by Rob Enderle -3.940359
"[W]hen it came to the iWatch, also a name that Apple didn't own, Apple walked away from it and instead launched the Apple Watch. Certainly, no risk of litigation, but the product's sales are a fraction of what they otherwise might have been with the proper name and branding." by Rob Enderle -4.152145
"[W]hen Apple wanted the name "iPhone" and it was owned by Cisco, Steve Jobs just took it, and his legal team executed so he could keep it. It turned out that doing this was surprisingly inexpensive. And, as the Apple Watch showcased, the Apple Phone likely would not have sold anywhere near as well as the iPhone." by Rob Enderle -4.187223
"The cause of [Apple v. Qualcomm] appears to be an effort by Apple to pressure Qualcomm into providing a unique discount, largely because Apple has run into an innovation wall, is under increased competition from firms like Samsung, and has moved to a massive cost reduction strategy. (I've never known this to end well, as it causes suppliers to create unreliable components and outright fail.)" by Rob Enderle -4.318575
"Apple tends to aggressively work to not discover problems with products that are shipped and certainly not talk about them." by Rob Enderle -4.380863
"Apple no longer owns the tablet market, and will likely lose dominance this year or next. ... this level of sustained dominance doesn't appear to recur with the same vendor even if it launched the category." by Rob Enderle -4.397954
"Apple is becoming more and more like a typical tech firm — that is, long on technology and short on magic. ... Apple is drifting closer and closer to where it was back in the 1990s. It offers advancements that largely follow those made by others years earlier, product proliferation, a preference for more over simple elegance, and waning excitement." by Rob Enderle -4.448473
"[T]he litigation between Qualcomm and Apple/Intel ... is weird. What makes it weird is that Intel appears to think that by helping Apple drive down Qualcomm prices, it will gain an advantage, but since its only value is as a lower cost, lower performing, alternative to Qualcomm's modems, the result would be more aggressively priced better alternatives to Intel's offerings from Qualcomm/Broadcom, wiping Intel out of the market. On paper, this is a lose/lose for Intel and even for Apple. The lower prices would flow to Apple competitors as well, lowering the price of competing phones. So, Apple would not get a lasting benefit either." by Rob Enderle -4.469852 Ronald McDonald of Apple, he is the face." by Rob Enderle -3.822949
"With phones moving to technologies such as Apple Pay, an unwillingness to assure security could create a Target-like exposure that wipes Apple out of the market." by Rob Enderle -3.849055
"*:From Joint Apple / HP press release dated 1 January 2004 available [http://www.apple.com/pr/library/2004/jan/08hp.html here]." by Carly Fiorina -3.895163
"Samsung did to Apple what Apple did to Microsoft, skewering its devoted users and reputation, only better. ... There is a way for Apple to fight back, but the company no longer has that skill, and apparently doesn't know where to get it, either." by Rob Enderle -4.052616
"*** The previous line contains the naughty word '$&'.\n if /(ibm|apple|awk)/; # :-)" by Larry Wall -4.088445
"The cause of [Apple v. Qualcomm] appears to be an effort by Apple to pressure Qualcomm into providing a unique discount, largely because Apple has run into an innovation wall, is under increased competition from firms like Samsung, and has moved to a massive cost reduction strategy. (I've never known this to end well, as it causes suppliers to create unreliable components and outright fail.)" by Rob Enderle -4.169533
"[T]he litigation between Qualcomm and Apple/Intel ... is weird. What makes it weird is that Intel appears to think that by helping Apple drive down Qualcomm prices, it will gain an advantage, but since its only value is as a lower cost, lower performing, alternative to Qualcomm's modems, the result would be more aggressively priced better alternatives to Intel's offerings from Qualcomm/Broadcom, wiping Intel out of the market. On paper, this is a lose/lose for Intel and even for Apple. The lower prices would flow to Apple competitors as well, lowering the price of competing phones. So, Apple would not get a lasting benefit either." by Rob Enderle -4.197869
"Apple tends to aggressively work to not discover problems with products that are shipped and certainly not talk about them." by Rob Enderle -4.204618
"Today's tech companies aren't built to last, as Apple's recent earnings report shows all too well." by Rob Enderle -4.209901
"[W]hen it came to the iWatch, also a name that Apple didn't own, Apple walked away from it and instead launched the Apple Watch. Certainly, no risk of litigation, but the product's sales are a fraction of what they otherwise might have been with the proper name and branding." by Rob Enderle -4.238582

Conclusions

In this article, we explored Wikiquote and created an LDA model using Algorithm::LDA.
After that we built an information retrieval application.

Thanks for reading my article! See you next time!

Citations

David M. Blei. “Probabilistic topic models.” Communications of the ACM 55.4 (2012): 77–84.
Xing Wei and W. Bruce Croft. “LDA-based document models for ad-hoc retrieval.” Proceedings of SIGIR 2006.

Perl 6 Advent Calendar: Day 23 – Blin, it’s Christmas soon!

Published by AlexDaniel on 2018-12-23T00:00:15

I’ve already mentioned Bisectable in one of the advent posts two years ago, but since then a lot has changed, so I think it’s time to give a brief history of the bisectable bot and its friends.

First of all, let’s define the problem that is being solved. Sometimes it happens that a commit introduces an unintended change in behavior (a bug). Usually we call that a regression, and in some cases the easiest way to figure out what went wrong and fix it is to first find which commit introduced the regression.

There are exactly 9000 commits between Rakudo 2015.12 and 2018.12, and even though it’s not over 9000, that’s still a lot.


Luckily, we don’t need to test all of the revisions. Assuming that the behavior wasn’t changing back and forth all the time, we can use binary search.

git bisect and binary search

Basically, given any commit range, we take a commit in the “middle” of the range and test it. If it’s “bad” or if it shows the “new” (now incorrect) behavior, then we can throw away the second half of our range (because we know that the change must have happened before that commit or exactly on that commit). Similarly we throw away the other half if it is “good” (or “old”). So instead of testing all 9000 commits we can just check about log n revisions (≈13).
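
A minimal sketch of that logic in Perl 6, with a hypothetical is-bad routine standing in for “build this revision and test it”:

# assumes @commits[0] is known good and @commits[*-1] is known bad
sub first-bad(@commits, &is-bad) {
    my ($good, $bad) = 0, @commits.end;
    while $bad - $good > 1 {
        my $mid = ($good + $bad) div 2;   # probe the middle of the range
        is-bad(@commits[$mid])
            ?? ($bad  = $mid)             # change happened at $mid or earlier
            !! ($good = $mid);            # change happened after $mid
    }
    @commits[$bad]   # the first commit showing the new behavior
}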

Git comes with the git bisect command, which implements the binary-search logic for you. All you have to do is give it some starting points and then, for every commit it jumps to, tell it whether it is good or bad. If you do that enough times, it’ll tell you which commit is at fault.

That’s all good, but there are two problems with it.

Problem ❶: Skipping

Let’s imagine a situation where 2 + 2 used to return 4 (correct!), but now returns 42 (… also right, but not quite).

So you kick off the bisection process, git jumps between revisions, you test them. If it’s 4 then it’s good (or old), if it’s 42 then it is bad (or new). But then you stumble upon this behavior:

> 2 + 2

Merry Christmas!

… Now what? Clearly that specific revision is somewhat special. We can’t tell if our bug is present or not, we simply can’t know. Yes, it doesn’t print 4, but we are looking for a very specific bug, so it doesn’t classify as “new” behavior either. Of course, we can toss a coin and mark it randomly as old or new, and hope for a Christmas miracle… but that has a 50% probability (if we see only one of these) of diverting the binary search in the wrong direction.

For these cases git provides a special skip command.

If you are testing manually, then it is somewhat straightforward to handle these revisions (as long as you remember that you should skip them). However, because of problem ❷, a lot of people are tempted to use git bisect run which automates the process with a script. It is possible to skip revisions using a script too (use exit code 125), but it is not that obvious how to figure out which revisions should be skipped.
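
For the record, the automated flow looks roughly like this (a sketch: the good/bad points are examples, and the build commands in check.sh depend on your setup):

$ git bisect start
$ git bisect bad HEAD
$ git bisect good 2015.12
$ git bisect run ./check.sh

where check.sh uses the special exit code 125 to ask git to skip an untestable revision:

#!/bin/sh
perl Configure.pl --gen-moar && make || exit 125   # cannot build: skip this revision
./perl6 -e 'exit((2 + 2 == 4) ?? 0 !! 1)'          # exit 0 means good, non-zero means bad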

Problem ❷: Build times

Let’s take the optimistic figure of 13 as an estimate of the number of revisions that we are going to test. Remember that it doesn’t include commits that we will have to skip, and possibly other extra builds that we might want to test.

The amount of time it takes to build rakudo varies depending on the hardware, but let’s optimistically say that it takes us 2 minutes to build rakudo on a particular commit and test it.

13 × 2 = 26 (minutes)

That’s not very convenient, right? And if something goes wrong during the process… you start over, and then you wait.

Bisectable

In 2016, after seeing the pain of those who have to run git bisect manually (actually, mostly myself), I wondered:

<AlexDaniel> has anybody thought about building rakudo for every single commit, so that you can quickly run git bisect?

The cost-benefit analysis of the idea was promptly questioned:

<perlpilot> AlexDaniel: do you believe that bisects will be common in the future?

To which I provided a very detailed justification:

<AlexDaniel> perlpilot: yes

Three days later, the bot joined the channel. The reactions were quite interesting to see:

<moritz> woah
<tadzik> wow
<RabidGravy> OoOOOoooh
<llfourn> Cooooool

Little did we know back then. Even I had no idea how useful it would turn out to be. Fast forward 2 years:

<lizmat> with regards to size of commits: I try to keep them as small and contained as possible, to allow for easier bisecting
<lizmat> in that sense, bisectable6 has changed the way I code
<lizmat> also: bisectable6 has made me worry less about changes I commit
<lizmat> because it usually limits the places to look for fixing an issue so much, that they can be fixed within minutes rather than hours
<lizmat> or at least show the cause very quickly (so the short-time fix may mean a revert)
<AlexDaniel> \o/

But it wasn’t always perfect. About one hour after the introduction of the bot, it was used for its purpose:

<moritz> bisect: try { NaN.Rat == NaN; exit 0 }; exit 1
<bisectable> moritz: (2016-05-02) https://github.com/rakudo/rakudo/commit/949a7c7

However, because of an off-by-one, it returned the wrong commit. The actual commit was e2f1fa7, and 949a7c7 is its parent.

Honestly, the bot was very bad back then. For example, it fully relied on the exit code, so you couldn’t just throw 2 + 2 into it and expect it to check the output. Eventually, different modes were implemented, and nowadays the bot first checks the behavior on the starting points (e.g. 2015.12 and HEAD) and determines the best strategy to perform the bisection. For example, if the signal is different (e.g. a SEGV), then it bisects based on the signal. If the signal is the same but the exit code is different, then it uses the exit code. If nothing else can be used, it bisects using the output.

Keep in mind that Bisectable checks for you whether the perl6 binary can be built at all. This means that in most cases you don’t need to add your own logic for skipping. Not only did it bring the bisection time down from tens of minutes to a few seconds, it also gives results that are more reliable and correct.

Storage

Some time later the commit range was expanded to 2014.01..HEAD, meaning all commits starting from the first ever Rakudo on Moar release. Currently it has over 17000 builds. It may sound like a lot, but with every rakudo installation taking just ≈28 MB, that's not too much. Having a few TB of storage should get you going for a few years to come.

That being said, I don’t have that luxury on my server. It has a RAID of 120 GB SSDs, so the whole thing not only has to fit into that little amount of space, but it should also leave enough space for the rest of the system.

There was a lot of experimentation (one, two) involved in figuring out the best strategy to save space, but long story short, we can go as low as about half a megabyte per build! Of course, it is always a tradeoff between the compression ratio and decompression speed, but using modern compression algorithms (zstd, lrzip) everything is relatively easy.
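To put rough numbers on that tradeoff: 17 000 builds × ≈28 MB is about 480 GB uncompressed, while 17 000 builds × ≈0.5 MB is about 8.5 GB, small enough to share a 120 GB RAID with the rest of the system.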

More bots, more better

Shortly after Bisectable was released, people saw an opportunity for other tools. Want to run some code on a specific commit? Sure, here’s a bot for that. Want to download a prebuilt rakudo archive instead of wasting your own cpu time? Yes, there’s another bot. Want to graph some info about rakudo? Of course there’s a bot for that!

And it continued until we reached the total of 17 bots! Some argue that these bots should stop multiplying like that, and perhaps people are right. But I guess the point is that now it is extremely easy to build upon Whateverable to create more tools for developers, which is of course great.

OK, now what?

So bisectable can bisect across thousands of commits in no time. It consumes very little storage space, and it doesn’t require full understanding of the bisection process from the user. Now that the bisection is free and easy, can we do more?

Yes, Blin!

You may have heard about Toaster. Toaster is a tool that attempts to install every module in the ecosystem on two or more revisions. For example, let’s say that the last release was 2018.12 and the release manager is about to cut a rakudo release from master HEAD. You can then run toaster on 2018.12 and master, and it will show which modules used to install cleanly but no longer do.

That gives us the information that something is likely wrong in Rakudo, but doesn’t tell what exactly. Given that this post was mostly about Bisectable, you can probably guess where this is going.

Project Blin – Toasting Reinvented

Blin is a quality assurance tool for Rakudo releases. It is used to find regressions in rakudo, but unlike Toaster, not only does it tell which modules are no longer installable, it also bisects rakudo to find out which commit caused the issue. Of course, it is built around Whateverable, so that extra functionality doesn't cost much (and doesn't even require a lot of code). As a bonus, it generates nice graphs to visualize how the issue propagates from module dependencies (though that is not very common).

One important feature of Blin is that it tries to install every module just once. So if module B depends on module A, A will be tested and installed once, and then reused for the testing of B. Because this process is parallelized, you may wonder how it was implemented. Basically, it uses the underrated react/whenever feature:

# slightly simplified
react {
    for @modules -> $module {
        whenever Promise.allof($module.depends.keys».done) {
            start { process-module $module, … }
        }
    }
}

For every module (we have more than 1200 now) it creates its own whenever block, which fires when the module's dependencies are satisfied. In my opinion, that's the whole implementation of the main logic in Blin; everything else is just glue to get Whateverable and Zef working together, plus some output generation.
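For that to work, each module object needs to carry a Promise that is kept once the module has been processed. A sketch of the assumed shape (the names here are guesses, not Blin's actual code):

class Module {
    has Str $.name;
    has %.depends{Any};                # keys are the Module objects we depend on
    has Promise $.done = Promise.new;  # kept once this module has been processed
}

sub process-module(Module $module) {
    # ... test and install the module here ...
    $module.done.keep;                 # unblocks every module that depends on it
}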

In some way, Blin didn’t change much in the way we do quality assurance for Rakudo. Toaster was already able to give us some basic info (albeit slower) so that we could start the investigation, and in the past I was known for shoving weird things (e.g. full modules with dependencies) into bisectable. It’s just that now it is much easier, and when The Day comes, I won’t be punished for robot abuse.

Future

Whateverable and Blin together have 243 open issues. Both projects work great and are very useful, but as we say, they are Less Than Awesome. Most issues are relatively easy and fun to work with, but they require time. If there’s anything you can help with or if you want to maintain these projects, please feel free to do so. And if you want to build your own tools based on Whateverable (which we probably need a lot!), see this hello world gist.

🎅🎄, 🥞

Perl 6 Advent Calendar: Day 22 – Testing Cro HTTP APIs

Published by jnthnwrthngtn on 2018-12-22T00:00:06

A good amount of my work time this year has been spent on building a couple of Perl 6 applications. After a decade of contributing to Perl 6 compiler and runtime development, it feels great to finally be using it to deliver production solutions solving real-world problems. I’m still not sure whether writing code in an IDE I founded, using a HTTP library I designed, compiled by a compiler I implemented large parts of, and running on a VM that I play architect for, makes me one of the world’s worst cases of “Not Invented Here”, or just really Full Stack.

Whatever I’m working on, I highly value automated testing. Each passing test is something I know works – and something that I won’t break as I evolve the software in question. Even with automated tests, bugs happen, but adding a test to cover the bug at least means I’ll make different bugs in the future, which is perhaps a bit more forgivable.

Most of the code, and complexity, in the system I’m currently working on is in its domain objects. Those are reached through a HTTP API, implemented using Cro – and like the rest of the system, this HTTP API has automated tests. They use one old module of mine – Test::Mock – along with a new module released this year, Cro::HTTP::Test. In today’s advent post, I’ll discuss how I’ve been using them together, with results that I find quite pleasing.

A sample problem

It’s the advent calendar, so of course I need a sufficiently festive example problem. For me, one of the highlights of Christmas time in Central Europe is the Christmas markets, many set on beautiful historic city squares. And what, aside from sausage and mulled wine, do we need on that square? A tall, handsome Christmas tree, of course! But how to find the best tree? Well, we get the internet to help, by building a system where they can submit suggestions of trees they’ve seen that might be suitable. What could possibly go wrong?

One can PUT to a route /trees/{latitude}/{longitude} to submit a candidate tree at that location. The expected payload is a JSON blob with a tree height, and a text description of 10-200 characters explaining why the tree is so awesome. If a tree in the same location has already been submitted, a 409 Conflict response should be returned. If the tree is accepted, then a simple 200 OK response will be produced, with a JSON body describing the tree.

A GET of the same URI will return a description of the tree in question, while a GET to /trees will return the submitted trees, tallest first.
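As a sketch of how a client might exercise that API using Cro::HTTP::Client (the host, port, and data are made up for illustration):

use Cro::HTTP::Client;

my $resp = await Cro::HTTP::Client.put:
    'http://localhost:10000/trees/50.5466504/14.8438714',
    content-type => 'application/json',
    body => { height => 4.25, description => 'Tall, bushy, and wonderfully festive' };
say await $resp.body;   # the JSON description of the accepted tree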

Testability

Back in high school, science classes were certainly among my favorites. Now and then, we got to do experiments. Of course, each experiment needed writing up – the planning beforehand, the results, and an analysis of them. One of the most important parts of the planning was about how to ensure a "fair test": how would we try to control all of the things we weren't trying to test, so that we could trust in our observations and draw conclusions from them?

Testing in software involves much the same thought process: how do we exercise the component(s) we're interested in, while controlling the context they operate in? Sometimes, we get lucky, and we're testing pure logic: it doesn't depend on anything other than the things we give it to work with. In fact, we can create our own luck in this regard, spotting parts of our system that can be pure functions or immutable objects; the current system I'm working on has a number of parts built exactly that way.

So, the first thing to do for testability is to find the bits of the system that can be like this and build them that way. Alas, not all things are so simple. HTTP APIs are often a gateway to mutable state, database operations, and so forth. Further, a good HTTP API will map error conditions from the domain level into appropriate HTTP status codes. We’d like to be able to create such situations in our tests, so as to cover them. This is where a tool like Test::Mock comes in – but to use it, we need to factor our Cro service in a way that is test-friendly.

Stubbing a service

For those new to Cro, let’s take a look at the bare minimum we can write to get a HTTP service up and running, serving some fake data about trees.

use Cro::HTTP::Router;
use Cro::HTTP::Server;

my $application = route {
    get -> 'trees' {
        content 'application/json', [
            {
                longitude => 50.4311548,
                latitude => 14.586079,
                height => 4.2,
                description => 'Nice color, very bushy'
            },
            {
                longitude => 50.5466504,
                latitude => 14.8438714,
                height => 7.8,
                description => 'Really tall and wide'
            },
        ]
    }
}

my $server = Cro::HTTP::Server.new(:port(10000), :$application);
$server.start;
react whenever signal(SIGINT) {
    $server.stop;
    exit;
}

This isn’t a great setup for being able to test our routes, however. Better would be to put the routes into a subroutine in a module lib/BestTree.pm6:

unit module BestTree;
use Cro::HTTP::Router;

sub routes() is export {
    route {
        get -> 'trees' {
            content 'application/json', [
                {
                    longitude => 50.4311548,
                    latitude => 14.586079,
                    height => 4.2,
                    description => 'Nice color, very bushy'
                },
                {
                    longitude => 50.5466504,
                    latitude => 14.8438714,
                    height => 7.8,
                    description => 'Really tall and wide'
                },
            ]
        }
    }
}

And use it from the script:

use BestTree;
use Cro::HTTP::Server;

my $application = routes();
my $server = Cro::HTTP::Server.new(:port(10000), :$application);
$server.start;
react whenever signal(SIGINT) {
    $server.stop;
    exit;
}

Now, if we had something that could be used to test that route blocks do the right thing, we could use this module, and get on with our testing.

Stores, models, etc.

There’s another problem, however. Our Christmas tree service will be stashing the tree information away in some database, as well as enforcing the various rules. Where should this logic go?

There’s many ways we might choose to arrange this code, but the key thing is that this logic does not belong in our Cro route handlers. Their job is to map between the domain objects and the world of HTTP, for example turning domain exceptions into appropriate HTTP error responses. That mapping is what we’ll want to test.

So, before we continue, let’s define how some of those things look. We’ll have a BestTree::Tree class that represents a tree:

class BestTree::Tree {
    has Rat $.latitude;
    has Rat $.longitude;
    has Rat $.height;
    has Str $.description;
}

And we’ll work with a BestTree::Store object. We won’t actually implement this as part of this post; it will be what we fake in our tests.

class BestTree::Store {
    method all-trees() { ... }
    method suggest-tree(BestTree::Tree $tree --> Nil) { ... }
    method find-tree(Rat $latitude, Rat $longitude --> BestTree::Tree) { ... }
}

But how can we arrange things so we can take control of the store that is used by the routes, for testing purposes? One easy way is to make it a parameter to our routes subroutine, meaning it will be available in the route block:

sub routes(BestTree::Store $store) is export {
    ...
}

This is a functional factoring. Some folks may prefer to use some kind of OO-based Dependency Injection, using some kind of container. That can work fine with Cro too: just have a method that returns the route block. (If building something non-tiny with Cro, check out the documentation on structuring services for some further advice on this front.)
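As a sketch of that OO-flavored alternative (the class name and wiring here are assumptions, not from this post):

use BestTree::Store;
use Cro::HTTP::Router;

class BestTree::Routing {
    has BestTree::Store $.store is required;

    # The route block comes from a method, so a DI container can
    # construct this object and inject whatever store it likes.
    method routes() {
        route {
            get -> 'trees' { ... }   # handlers as shown in this post
        }
    }
}

my $application = BestTree::Routing.new(store => BestTree::Store.new).routes;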

Getting a list of trees

Now we’re ready to start writing tests! Let’s stub the test file:

use BestTree;
use BestTree::Store;
use Cro::HTTP::Test;
use Test::Mock;
use Test;

# Tests will go here

done-testing;

We use BestTree, which contains the routes we want to test, along with BestTree::Store (so the type is available for mocking), Cro::HTTP::Test (for making test requests against our routes), Test::Mock (for creating the fake store), and the standard Test module.

Next, we’ll make a couple of tree objects to use in our tests:

my $fake-tree-a = BestTree::Tree.new:
        latitude => 50.4311548,
        longitude => 14.586079,
        height => 4.2,
        description => 'Nice color, very bushy';
my $fake-tree-b = BestTree::Tree.new:
        latitude => 50.5466504,
        longitude => 14.8438714,
        height => 7.8,
        description => 'Really tall and wide';

And here comes the first test:

subtest 'Get all trees' => {
    my $fake-store = mocked BestTree::Store, returning => {
        all-trees => [$fake-tree-a, $fake-tree-b]
    };
    test-service routes($fake-store), {
        test get('/trees'),
                status => 200,
                json => [
                    {
                        latitude => 50.4311548,
                        longitude => 14.586079,
                        height => 4.2,
                        description => 'Nice color, very bushy'
                    },
                    {
                        latitude => 50.5466504,
                        longitude => 14.8438714,
                        height => 7.8,
                        description => 'Really tall and wide'
                    }
                ];
        check-mock $fake-store,
                *.called('all-trees', times => 1, with => \());
    }
}

First, we make a fake of BestTree::Store that, whenever all-trees is called, will return the fake data we specify. We then use test-service, passing in the route block created with the fake store. All test calls within the block that follows will be executed against that route block.

Notice that we don’t need to worry about running a HTTP server here to host the routes we want to test. In fact, due to the pipeline architecture of Cro, it’s easily possible for us to take the Cro HTTP client, wire its TCP message output to put the data it would send into a Perl 6 Channel, and then have that data pushed into the server pipeline’s TCP message input pipeline, and vice versa. This means that we test things all the way down to the bytes that are sent and received, but without actually having to hit even the local network stack. (Aside: you can also use Cro::HTTP::Test with a URI, which means if you really wanted to spin up a test server, or even wanted to write tests against some other service running in a different process, you could do it.)

The test routine specifies a test case. Its first argument describes the request that we wish to perform – in this case, a get to /trees. The named arguments then specify how the response should look. The status check ensures we get the expected HTTP status code back. The json check is really two in one: it checks that the content type of the response indicates JSON, and that the deserialized body matches the data structure we provide.

If that’s all we did, and we ran our tests, we’d find they mysteriously pass, even though we didn’t yet edit our route block’s get handler to actually use the store! Why? Because it turns out I was lazy and used the data from my earlier little server example as my test data here. No worries, though: to make the test stronger, we can add a call to check-mock, and then assert that our fake store really did have the all-trees method called once, and with no arguments passed.

That just leaves us to make the test pass, by implementing the handler properly:

get -> 'trees' {
    content 'application/json', [
        $store.all-trees.map: -> $tree {
            {
                latitude => $tree.latitude,
                longitude => $tree.longitude,
                height => $tree.height,
                description => $tree.description
            }
        }
    ]
}

Getting a tree

Time for the next test: getting a tree. There are two cases to consider here: the one where the tree is found, and the one where the tree is not found. Here’s a test for the case where a tree is found:

subtest 'Get a tree that exists' => {
    my $fake-store = mocked BestTree::Store, returning => {
        find-tree => $fake-tree-b
    };
    test-service routes($fake-store), {
        test get('/trees/50.5466504/14.8438714'),
                status => 200,
                json => {
                    latitude => 50.5466504,
                    longitude => 14.8438714,
                    height => 7.8,
                    description => 'Really tall and wide'
                };
        check-mock $fake-store,
                *.called('find-tree', times => 1, with => \(50.5466504, 14.8438714));
    }
}

Running this now fails. In fact, the status code check fails first, because we didn’t implement the route yet, and so get 404 back, not the expected 200. So, here’s an implementation to make it pass:

get -> 'trees', Rat() $latitude, Rat() $longitude {
    given $store.find-tree($latitude, $longitude) -> $tree {
        content 'application/json', {
            latitude => $tree.latitude,
            longitude => $tree.longitude,
            height => $tree.height,
            description => $tree.description
        }
    }
}

Part of this looks somewhat familiar from the other route, no? So, with two passing tests, let’s go forth and refactor:

get -> 'trees' {
    content 'application/json',
            [$store.all-trees.map(&tree-for-json)];
}

get -> 'trees', Rat() $latitude, Rat() $longitude {
    given $store.find-tree($latitude, $longitude) -> $tree {
        content 'application/json', tree-for-json($tree);
    }
}

sub tree-for-json(BestTree::Tree $tree --> Hash) {
    return {
        latitude => $tree.latitude,
        longitude => $tree.longitude,
        height => $tree.height,
        description => $tree.description
    }
}

And the tests pass, and we know our refactor is good. But wait, what about if there is no tree there? In that case, the store will return Nil. We’d like to map that into a 404. Here’s another test:

subtest 'Get a tree that does not exist' => {
    my $fake-store = mocked BestTree::Store, returning => {
        find-tree => Nil
    };
    test-service routes($fake-store), {
        test get('/trees/50.5466504/14.8438714'),
                status => 404;
        check-mock $fake-store,
                *.called('find-tree', times => 1, with => \(50.5466504, 14.8438714));
    }
}

Which fails, in fact, with a 500 error, since we didn't consider that case in our route block. Happily, this one is easy to deal with: turn our given into a with, which checks we got a defined object, and then add an else and produce the 404 Not Found response.

get -> 'trees', Rat() $latitude, Rat() $longitude {
    with $store.find-tree($latitude, $longitude) -> $tree {
        content 'application/json', tree-for-json($tree);
    }
    else {
        not-found;
    }
}

Submitting a tree

Last but not least, let’s test the route for suggesting a new tree. Here’s the successful case:

subtest 'Suggest a tree successfully' => {
    my $fake-store = mocked BestTree::Store;
    test-service routes($fake-store), {
        my %body = description => 'Awesome tree', height => 4.25;
        test put('/trees/50.5466504/14.8438714', json => %body),
                status => 200,
                json => {
                    latitude => 50.5466504,
                    longitude => 14.8438714,
                    height => 4.25,
                    description => 'Awesome tree'
                };
        check-mock $fake-store,
                *.called('suggest-tree', times => 1, with => :(
                    BestTree::Tree $tree where {
                        .latitude == 50.5466504 &&
                        .longitude == 14.8438714 &&
                        .height == 4.25 &&
                        .description eq 'Awesome tree'
                    }
                ));
    }
}

This is mostly familiar, except the check-mock call looks a little different this time. Test::Mock lets us test the arguments in two different ways: with a Capture (as we've done so far) or with a Signature. The Capture case is great for all of the simple cases, where we're just dealing with boring values. However, once we get into reference types, or if we don't actually care about exact values and just want to assert the things we care about, a signature gives us the flexibility to do that. Here, we use a where clause to check that the tree object that the route handler has constructed contains the expected data.

Here’s the route handler that does just that:

put -> 'trees', Rat() $latitude, Rat() $longitude {
    request-body -> (Rat(Real) :$height!, Str :$description!) {
        my $tree = BestTree::Tree.new: :$latitude, :$longitude,
                :$height, :$description;
        $store.suggest-tree($tree);
        content 'application/json', tree-for-json($tree);
    }
}

Notice how Cro lets us use Perl 6 signatures to destructure the request body. In one line, we've said that the body must contain a height (any Real number, coerced into a Rat) and a string description, that both are required (the !), and that they should be unpacked into variables for us.

Should any of those fail, Cro will automatically produce a 400 Bad Request response for us. In fact, we can write tests to cover that – along with a new test to make sure a conflict will result in a 409.

subtest 'Problems suggesting a tree' => {
    my $fake-store = mocked BestTree::Store, computing => {
        suggest-tree => {
            die X::BestTree::Store::AlreadySuggested.new;
        }
    }
    test-service routes($fake-store), {
        # Missing or bad data.
        test put('/trees/50.5466504/14.8438714', json => {}),
                status => 400;
        my %bad-body = description => 'ok';
        test put('/trees/50.5466504/14.8438714', json => %bad-body),
                status => 400;
        %bad-body<height> = 'grinch';
        test put('/trees/50.5466504/14.8438714', json => %bad-body),
                status => 400;

        # Conflict.
        my %body = description => 'Awesome tree', height => 4.25;
        test put('/trees/50.5466504/14.8438714', json => %body),
                status => 409;
    }
}

The main new thing here is that we’re using computing instead of returning with mocked. In this case, we pass a block, and it will be executed. (The block does not get the method arguments, however. If we want to get those, there is a third option, overriding, where we get to take the arguments and write a fake method body.)

And how to handle this? By making our route handler catch and map the typed exception:

put -> 'trees', Rat() $latitude, Rat() $longitude {
    request-body -> (Rat(Real) :$height!, Str :$description!) {
        my $tree = BestTree::Tree.new: :$latitude, :$longitude,
                :$height, :$description;
        $store.suggest-tree($tree);
        content 'application/json', tree-for-json($tree);
        CATCH {
            when X::BestTree::Store::AlreadySuggested {
                conflict;
            }
        }
    }
}

Closing thoughts

With Cro::HTTP::Test, there’s now a nice way to write HTTP tests in Perl 6. Put together with a testable design, and perhaps a module like Test::Mock, we can also isolate our Cro route handlers from everything else, easing their testing.

The logic in our route handlers here is relatively straightforward; small sample problems usually are. Even here, however, I find there’s value in the journey, rather than only in the destination. The act of writing tests for a HTTP API puts me in the frame of mind of whoever will be calling the API, which can be a useful perspective to have. Experience also tells that tests “too simple to fail” do end up catching mistakes: the kinds of mistakes I might assume I’m too smart to make. Discipline goes a long way. On which note, I’ll now be disciplined about taking a break from the keyboard now and then, and go enjoy a Christmas market. -Ofun!

Perl 6 Advent Calendar: Day 21 – A Red Secret Santa

Published by SmokeMachine on 2018-12-20T23:01:49

The year is ending and we have a lot to celebrate! What is a better way to celebrate the end of the year than with our family and friends? To help achieve that, here at my home, we decided to run a Secret Santa Game! So, my goal is to write a Secret Santa Program! That’s something where I can use this wonderful project called Red.

Red is an ORM (object-relational mapper) for Perl 6, still under development and not yet published as a module. But it's growing, and it is close to a release.

So let’s create our first table: a table that will store the people participating in our Secret Santa. To the code:

Red maps relational databases to OOP. Each table is mapped to a Red class (model), each of whose objects represents a row.

The way we create a model is by using the model special word. A model is just a normal class that extends Red::Model and has a MetamodelX::Red::Model object as its metaclass. Red does not add any methods you didn't explicitly create to its models, so to interact with the database you use the metaclass.

But let’s continue.

The code creates a new model called Person. The name of the table this model represents will be the same name as the model: “Person”. If necessary, you can change the name of the table with the is table<...> trait (for example: model Person is table<another_name> {...}).

This model has 3 attributes: $!id (the primary key), $.name, and $.email (which may be left unset).

Red uses not null columns by default, so if you want to create a nullable column you should use is column{ :nullable }.

So all attributes on Person are columns. The is serial (I mean the :id part) means that it’s the table’s primary key.

After that it’s setting a dynamic variable ($*RED-DB) for the result of database "SQLite". The database sub receives the driver‘s name and the parameters it expects.

In this case it uses the SQLite driver, and if you don't pass any argument, it will use it as an in-memory database. If you want to use a file named secret-santa.db as the database file, you can do database "SQLite", :database<secret-santa.db>. Or, if you want to use a local Postgres, just use database "Pg". Red uses the variable $*RED-DB to know which database to use.
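A minimal sketch of what has been described so far (the attribute details are assumptions pieced together from the rest of the post):

use Red;   # Red isn't on the ecosystem yet; install it from its repository

model Person {
    has UInt $!id    is serial;
    has Str  $.name  is column;
    has Str  $.email is column{ :nullable };
}

my $*RED-DB = database "SQLite";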

OK, now let's create the table! As I said before, Red does not add any methods you didn't explicitly ask for, so to create the table, a metaclass method is used: Person.^create-table is how you create the table.

This will run the corresponding CREATE TABLE statement against SQLite.

Now we should insert some data. We do that with another meta method (.^create). The .^create meta method expects the same arguments .new expects. Each named argument will set an attribute with the same name. .^create will create a new Person object, save it in the database (with .^save: :insert), and return it.
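For example, creating one of the people we'll see in the output below:

my \fernando = Person.^create: :name<Fernando>, :email<fco@aco.com>;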

It runs the corresponding INSERT statement.

Every model has a ResultSeq. That is a sequence that represents every row on the table. We can get its ResultSeq with .^all (or .^rs). ResultSeq has some methods to help you to get information from the table, for example: .grep will filter the rows (as it does in a normal Seq) but it doesn’t do that in memory, it returns a new ResultSeq with that filter set. When its iterator is retrieved, it runs a SQL query using everything set on the ResultSeq.

In our example, Person.^all.grep(*.email.defined).map: *.name will run a single SELECT with the grep translated into the WHERE clause.

And it’ll print:

Fernando
Aline

Okay, we now have code that can save who has entered our Secret Santa game. But everyone wants different gifts. How can we record each person's wishes?

Let’s modify the code to make it save the wishlist for everyone participating in the secret santa:

That prints:

Fernando
    Comma => https://commaide.com
    perl6 books => https://perl6book.com
    mac book pro => https://www.apple.com/shop/buy-mac/macbook-pro/15-inch-space-gray-2.6ghz-6-core-512gb#

Aline
    a new closet => https://i.pinimg.com/474x/02/05/93/020593b34c205792a6a7fd7191333fc6--wardrobe-behind-bed-false-wall-wardrobe.jpg

Fernanda
    mimikyu plush => https://www.pokemoncenter.com/mimikyu-poké-plush-%28standard-size%29---10-701-02831
    camelia plush => https://farm9.static.flickr.com/8432/28947786492_80056225f3_b.jpg

Sophia
    baby alive => https://www.target.com/p/baby-alive-face-paint-fairy-brunette/-/A-51304817

Now we have a new model Wishlist that refers to a table named wishlist. It has $!id as its id; $!name and $!link are columns; and there are some new things! has UInt $!wisher-id is referencing{ Person.id }; is the same as has UInt $!wisher-id is column{ :references{ Person.id } };, meaning it's a column that's a foreign key referencing Person's id column. It also has has Person $.wisher is relationship{ .wisher-id };, which is not a column but a "virtual" field. The $ sigil means that there is only 1 wisher for a wish. is relationship expects a Callable that will receive a model: if the attribute is Scalar, it will receive the current model as the only argument, so in this case it will be Wishlist. The return value of the relationship's Callable must be a column that references some other column.

Let's look at the table Red creates for this model: notably, no wisher column is created, because the relationship is purely virtual.

The Person model has changed too! Now it has a @.wishes relationship (has Wishlist @.wishes is relationship{ .wisher-id }). It uses a @ sigil so each Person can have more than one wish. The Callable passed will receive the type of the Positional attribute (Wishlist in this case) and must return a column that references some other column.
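Collecting the snippets from the last two paragraphs into one place, the two models now look roughly like this (a sketch, ignoring declaration-order details):

model Wishlist {
    has UInt   $!id        is serial;
    has Str    $.name      is column;
    has Str    $.link      is column;
    has UInt   $!wisher-id is referencing{ Person.id };
    has Person $.wisher    is relationship{ .wisher-id };
}

model Person {
    has UInt     $!id     is serial;
    has Str      $.name   is column;
    has Str      $.email  is column{ :nullable };
    has Wishlist @.wishes is relationship{ .wisher-id };
}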

The table created is the same as before.

We created a new Person as we did before: my \fernando = Person.^create: :name<Fernando>, :email<fco@aco.com>; and now we can use the relationship (wishes) to create a new wish (fernando.wishes.create: :name<Comma>, :link<https://commaide.com>). That creates a new wish for Fernando, running an INSERT that fills in wisher_id automatically.

Did you notice? wisher_id is 1, and 1 is Fernando's id. A wish created through Fernando's .wishes already knows that it belongs to Fernando.

And then we define wishes for every person we create.

Then we loop over every Person in the database (Person.^all) and print its name and loop over that person’s wishes and print its name and link.

Okay, we can save who is participating… and what they want… but what about the draw? Who should give a gift to whom? To do that, we change our program again.

Now Person has two new attributes ($!pair-id and $.pair) and a new method (draw). $!pair-id is a foreign key that references the id field on the same table (Person), so we have to use an alias (.^alias). The other one is the relationship ($.pair) that uses that foreign key.

The new method (draw) is where the magic happens. It uses the method .pick: *, which on a normal Positional would shuffle the list. It does the same here, translating the shuffle into the query it runs.

Once we have the shuffled list, we use .rotor to take the items two at a time, stepping back one item each time, so each person gives to the next one in the shuffled list; the last person in the list gives to the first.
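To see the .rotor trick in isolation, here is a sketch on a plain, already-shuffled list:

my @shuffled = <Fernando Aline Fernanda Sophia>;
my @flat = flat @shuffled.rotor(2 => -1), (@shuffled.tail, @shuffled.head);
for @flat -> $giver, $receiver {
    say "$giver -> $receiver";   # Fernando -> Aline, Aline -> Fernanda, ...
}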

And this is the output of our final code:

Fernando -> Sophia
Wishlist: baby alive
Aline -> Fernanda
Wishlist: mimikyu plush, camelia plush
Fernanda -> Fernando
Wishlist: COMMA, perl6 books, mac book pro
Sophia -> Aline
Wishlist: a new closet

As a bonus, let’s check out the track Red is going to follow. This is a current working code:

And this is the SQL it runs:

And prints:

Fernando => fco@aco.com
Aline => aja@aco.com
Fernanda
Sophia

Perlgeek.de: Perl 6 Coding Contest 2019: Seeking Task Makers

Published by Moritz Lenz on 2018-11-10T23:00:01

I want to revive Carl Mäsak's Coding Contest as a crowd-sourced contest.

The contest will be in four phases, beginning with the development of tasks.

For the first phase, development of tasks, I am looking for volunteers who come up with coding tasks collaboratively. Sadly, these volunteers, including myself, will be excluded from participating in the second phase.

I am looking for tasks that ...

This is non-trivial, so I'd like to have others to discuss things with, and to come up with some more tasks.

If you want to help with task creation, please send an email to moritz.lenz@gmail.com, stating your intentions to help, and your freenode IRC handle (optional).

There are other ways to help too.

In these cases you can use the same email address to contact me, or use IRC (moritz on freenode) or twitter.

my Timotimo \this: Rakudo Perl 6 performance analysis tooling: Progress Report

Published by Timo Paulssen on 2018-11-10T00:22:45

I started working on the grant properly later than I had hoped, and have not been able to invest as much time per week as I had initially planned. However, recently more and more parts of the tool have come together to form a useful product.

Recently, core developer Stefan 'nine' Seifert has been able to make good use of the tool to assist in analyzing and optimizing the performance of a re-write of MoarVM's bytecode compiler in NQP code. Here are quotes from Stefan on IRC, along with links to the public IRC logs:

https://colabti.org/irclogger/irclogger_log/perl6-dev?date=2018-10-09#l56

I now managed to get a profile of the MAST stage using a patched --profile-stage, but the profile is too large even for the QT frontend :/
timotimo++ # moarperf works beautifully with this huge profile :)

https://colabti.org/irclogger/irclogger_log/perl6-dev?date=2018-10-23#l66

timotimo++ # the profiler is indespensable in getting nqp-mbc merge worthy.

Below you can find a copy of the "Deliverables and Inchstones" section of the original grant proposal with explanations of the current status/progress and any additional features I've been able to implement, along with a couple of screenshots to illustrate what I mean.

The "web frontend for heap snapshot analyzer" and "bunch of examples" sections have had their sub-headings removed, because work on that area has not yet started.

Deliverables and Inchstones

A blog with progress reports.

There have been 6 reports so far; the low count is mostly due to a long starting delay and some additional intervention from Life Itself.

A web frontend for the heap snapshot analyzer

I have pushed this off to the future, opting to prioritize the instrumented profiler frontend instead.

A new web frontend for the instrumented profiler

[Several screenshots of the new instrumented profiler frontend]

User-facing documentation on using the profiling facilities and interpreting the data

This has not yet happened, except for the blog post for the release that explains how to run the profiler, and the README.

Build a bunch of examples that show different performance issues and describe how to figure them out

There are currently big changes going on in MoarVM's optimizer relating to removing unnecessary boxing of intermediate values in calculations and such, which may make any examples moot real soon. I've delayed this part of the work a bit for that reason.

In Summary ...

I was positively surprised when I last opened the original tasks list and saw what was already done, and what could be finished with just a little bit of work!

I hope you'll give moarperf a try some time soon and let me know what you think! Here's a link to the project on github: MoarPerf project on github

Thanks for staying around and reading my blog posts :)
  - Timo

my Timotimo \this: Where did I leave my AT-KEYs?

Published by Timo Paulssen on 2018-11-09T21:17:24

Even though it's only been a week and a half since my last report, there's already enough new stuff to report on! Let's dig right in.

[Cover photo by NeONBRAND / Unsplash]

Overview Page

[Screenshot: the new overview page]

Is there terribly much to say about this? It now shows the same data that was already available in the old profiler's overview page. It's the go-to page when you've changed your code in the hopes of making it faster, or changed your version of rakudo in hopes of not having to change your code in order to make it faster.

Here's a screenshot from the other profiler for comparison:

[Screenshot: the old profiler's overview page]

The main addition over the previous version is the "start times of threads" piece at the top left. In multi-threaded programs it shows you when more threads were added, for example if you use start blocks on the default ThreadPoolScheduler.

The GC Performance section gives you not only the average time spent doing minor and major collections, but also the minimum and maximum time.

The rest is pretty much the same, except the new version puts separators in numbers with more than three digits, which I find much easier on the eyes than eight-digit numbers without any hints to guide the eye.

GC Run List

[Screenshot: the GC tab, showing the default split-colors chart]

The graphs at the top of the GC tab have changed a bit! There's now the option to show only major, only minor, or all collections in the graph, and there are three different display modes for the "Amounts of data" graphs.

The one shown by default gives bars split into three colors to signify how much of the nursery's content has been freed (green), kept around for another round (orange), or promoted to the old generation (red). That's the mode you can see in the screenshot above.

[Screenshot: the "Combined Chart" mode]

The second mode is "Combined Chart", which just stacks the amounts in kilobytes on top of each other. That means when more threads get added, the bars grow. In this example screenshot, you can barely even see orange or red in the bars, but this program is very light on long-lived allocations.

[Screenshot: the "Split Charts" mode]

The third mode is "Split Charts", which has one chart for each "color". Since they all have their own scales, you can more easily see differences from run to run, even if some of the charts appear tiny in the "combined" or "combined relative" charts.

Routines List

The routines overview – and actually all lists of routines in the program – have a new little clickable icon now. Here it is:

[Screenshot: a routine row with the new navigation icon]

The icon I'm talking about is the little up-right-arrow in a little box after a routine's name. When you click on it, the row turns blue. Huh, that doesn't sound so useful? That's because the button brings you to the routines list and scrolls to and highlights the routine you've clicked on. If you're already right there, you will not notice a lot of change, of course.

However, it gets more interesting in the callers or callees lists:

[Screenshot: the navigation icon in a callees list]

Even better, since it actually uses links to actual URLs, the browser's back/forward buttons work with this.

Other useful places you can find this navigation feature are the allocations list and the call graph explorer:

[Screenshot: the allocations list]

[Screenshot: the call graph explorer]

Where are my AT-KEYs at?

If you have a very big profile, a routine you're interested in may be called in many, many places. Here's a profile of "zef list". Loading up the call graph for this routine may just explode my computer:

[Screenshot: a routine with tens of thousands of call sites]

Note the number of Sites: 27 thousand. Not good.

But what if you're already in the call graph explorer anyway, and you just want to find your way towards functions that call your routine?

Enter the search box:

[Screenshot: the call graph search box, with pointer icons in the tree]

As you can see, when you input something in the search bar, hand icons will point you towards your destination in the call graph.

I'm looking to add many more different kinds of searches. For example, I can imagine it would be interesting to see at a glance "which branches will ever reach non-core code". Searching for files ought to also be interesting.

Another idea I've had is that when you've entered a search term, it should be possible to exclude specific results, for example if there are many routines with the same name, but some of them are not the ones you mean. For example, "identity" is in pretty much every profile, since that's what many "return"s will turn into (when there's neither a decont nor a type check needed). However, Distributions (which is what zef deals in) also have an "identity" attribute, which is about name, version, author, etc.

At a much later point, perhaps even after the grant has finished, there could also be search queries that depend on the call tree's shape, for example "all instances where &postcircumfix:{ } is called by &postcircumfix:{ }".

That's it?

Don't worry! I've already got an extra blog post in the works which will be a full report on overall completion of the grant. There'll be a copy of the original list (well, tree) of the "deliverables and inchstones" along with screenshots and short explanations.

I hope you're looking forward to it! I still need to replace the section that says "search functionality is currently missing" with a short and sweet description of what you read in the previous section :)

With that I wish you a good day and a pleasant weekend
  - Timo

Zoffix Znet: Perl 6 Advent Calendar 2018 Call for Authors

Published on 2018-10-31T00:00:00

Write a blog post about Perl 6

Zoffix Znet: A Request to Larry Wall to Create a Language Name Alias for Perl 6

Published on 2018-10-07T00:00:00

The culmination of the naming discussion

6guts: Speeding up object creation

Published by jnthnwrthngtn on 2018-10-06T22:59:11

Recently, a Perl 6 object creation benchmark result did the rounds on social media. This Perl 6 code:

class Point {
    has $.x;
    has $.y;
}
my $total = 0;
for ^1_000_000 {
    my $p = Point.new(x => 2, y => 3);
    $total = $total + $p.x + $p.y;
}
say $total;

Now (on HEAD builds of Rakudo and MoarVM) runs faster than this roughly equivalent Perl 5 code:

use v5.10;

package Point;

sub new {
    my ($class, %args) = @_;
    bless \%args, $class;
}

sub x {
    my $self = shift;
    $self->{x}
}

sub y {
    my $self = shift;
    $self->{y}
}

package main;

my $total = 0;
for (1..1_000_000) {
    my $p = Point->new(x => 2, y => 3);
    $total = $total + $p->x + $p->y;
}
say $total;

(Aside: yes, I know there’s no shortage of libraries in Perl 5 that make OO nicer than this, though those I tried also made it slower.)

This is a fairly encouraging result: object creation, method calls, and attribute access are the operational bread and butter of OO programming, so it's a pleasing sign that Perl 6 on Rakudo/MoarVM is getting increasingly speedy at them. In fact, it's probably a bit better at them than this benchmark's raw numbers show.

While dealing with Int values got faster recently, it’s still really making two Int objects every time around that loop and having to GC them later. An upcoming new set of analyses and optimizations will let us get rid of that cost too. And yes, startup will get faster with time also. In summary, while Rakudo/MoarVM is now winning that benchmark against Perl 5, there’s still lots more to be had. Which is a good job, since the equivalent Python and Ruby versions of that benchmark still run faster than the Perl 6 one.

In this post, I’ll look at the changes that allowed this benchmark to end up faster. None of the new work was particularly ground-breaking; in fact, it was mostly a case of doing small things to make better use of optimizations we already had.

What happens during construction?

Theoretically, the default new method in turn calls bless, passing the named arguments along. The bless method then creates an object instance, followed by calling BUILDALL. The BUILDALL method goes through the set of steps needed for constructing the object. In the case of a simple object like ours, that involves checking if an x and y named argument were passed, and if so assigning those values into the Scalar containers of the object attributes.

For those keeping count, that's at least 3 method calls (new, bless, and BUILDALL).

However, there’s a cheat. If bless wasn’t overridden (which would be an odd thing to do anyway), then the default new could just call BUILDALL directly anyway. Therefore, new looks like this:

multi method new(*%attrinit) {
    nqp::if(
      nqp::eqaddr(
        (my $bless := nqp::findmethod(self,'bless')),
        nqp::findmethod(Mu,'bless')
      ),
      nqp::create(self).BUILDALL(Empty, %attrinit),
      $bless(|%attrinit)
    )
}

The BUILDALL method was originally a little “interpreter” that went through a per-object build plan stating what needs to be done. However, for quite some time now we’ve instead compiled a per-class BUILDPLAN method.

Slurpy sadness

To figure out how to speed this up, I took a look at how the specializer was handling the code. The answer: not so well, though there were certainly some positive moments in there.

However, the new method was getting only a “certain” specialization, which is usually only done for very heavily polymorphic code. That wasn’t the case here; this program clearly constructs overwhelmingly one type of object. So what was going on?

In order to produce an “observed type” specialization – the more powerful kind – it needs to have data on the types of all of the passed arguments. And for the named arguments, that was missing. But why?

Logging of passed argument types is done on the callee side, where the parameters are processed.

The argument logging was done as the interpreter executed each parameter processing instruction. However, when there was a slurpy, it would just swallow up all the remaining arguments without logging type information. Thus the information about the argument types was missing, and we ended up with a less powerful form of specialization.

Teaching the slurpy handling code about argument logging felt a bit involved, but then I realized there was a far simpler solution: log all of the things in the argument buffer at the point an unspecialized frame is being invoked. We’re already logging the entry to the call frame at that point anyway. This meant that all of the parameter handling instructions got a bit simpler too, since they no longer had logging to do.

Conditional elimination

Having new being specialized in a more significant way was an immediate win. Of note, this part:

      nqp::eqaddr(
        (my $bless := nqp::findmethod(self,'bless')),
        nqp::findmethod(Mu,'bless')
      ),

Was quite interesting. Since we were now specializing on the type of self, then the findmethod could be resolved into a constant. The resolution of a method on the constant Mu was also a constant. Therefore, the result of the eqaddr (same memory address) comparison of two constants should also have been turned into a constant…except that wasn’t happening! This time, it was simple: we just hadn’t implemented folding of that yet. So, that was an easy win, and once done meant the optimizer could see that the if would always go a certain way and thus optimize the whole thing into the chosen branch. Thus the new method was specialized into something like:

multi method new(*%attrinit) {
    nqp::create(self).BUILDALL(Empty, %attrinit)
}

Further, the create could be optimized into a “fast create” op, and the relatively small BUILDALL method could be inlined into new. Not bad.

Generating simpler code

At this point, things were much better, but still not quite there. I took a look at the BUILDALL method compilation, and realized that it could emit faster code.

The %attrinit is a Perl 6 Hash object, which is for the most part a wrapper around the lower-level VM hash object, which is the actual hash storage. We were, curiously, already pulling this lower-level hash out of the Hash object and using the nqp::existskey primitive to check if there was a value, but were not doing that for actually accessing the value. Instead, an .AT-KEY('x') method call was being done. While that was being handled fairly well – inlined and so forth – it also does its own bit of existence checking. I realized we could just use the nqp::atkey primitive here instead.

Later on, I also realized that we could do away with nqp::existskey and just use nqp::atkey. Since a VM-level null is something that never exists in Perl 6, we can safely use it as a sentinel to mean that there is no value. That got us down to a single hash lookup.
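A rough sketch of the difference (simplified pseudo-code, not the actual generated code; $init stands for the low-level VM hash pulled out of %attrinit):

# Before: an existence check, plus a method call that checks again
if nqp::existskey($init, 'x') {
    nqp::getattr(self, Point, '$!x') = %attrinit.AT-KEY('x');
}

# After: a single lookup, with the VM-level null meaning "not passed"
my \value = nqp::atkey($init, 'x');
nqp::getattr(self, Point, '$!x') = value unless nqp::isnull(value);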

By this point, we were just about winning the benchmark, but only by a few percent. Were there any more easy pickings?

An off-by-one

My next surprise was that the new method didn’t get inlined. It was within the size limit. There was nothing preventing it. What was going on?

Looking closer, it was even worse than that. Normally, when something is too big to be inlined, but we can work out what specialization will be called on the target, we do “specialization linking”, indicating which specialization of the code to call. That wasn’t happening. But why?

Some debugging later, I sheepishly fixed an off-by-one in the code that looks through a multi-dispatch cache to see if a particular candidate matches the set of argument types we have during optimization of a call instruction. This probably increased the performance of quite a few calls involving passing named arguments to multi-methods – and meant new was also inlined, putting us a good bit ahead on this benchmark.

What next?

The next round of performance improvements – both to this code and plenty more besides – will come from escape analysis, scalar replacement, and related optimizations. I’ve already started on that work, though expect it will keep me busy for quite a while. I will, however, be able to deliver it in stages, and am optimistic I’ll have the first stage of it ready to talk about – and maybe even merge – in a week or so.

brrt to the future: A future for fork(2)

Published by Bart Wiegmans on 2018-10-04T05:18:00

Hi hackers. Today I want to write about a new functionality that I've been developing for MoarVM that has very little to do with the JIT compiler. But it is still about VM internals so I guess it will fit.

Many months ago, jnthn wrote a blog post on the relation between perl 5 and perl 6. And as a longtime and enthusiastic perl 5 user - most of the JIT's compile time support software is written in perl 5 for a reason - I wholeheartedly agree with the 'sister language' narrative. There is plenty of room for all sorts of perls yet, I hope. Yet one thing kept itching me:
Moreover, it’s very much the case that Perl 5 and Perl 6 make different trade-offs. To pick one concrete example, Perl 6 makes it easy to run code across multiple threads, and even uses multiple threads internally (for example, performing optimization and JIT compilation on a background thread). Which is great…except the only winning move in a game involving both threads and fork() is not to play. Sometimes one just can’t have their cake and eat it, and if you’re wanting a language that more directly gives you your POSIX, that’s probably always going to be a strength of Perl 5 over Perl 6.
(Emphasis mine). You see, I had never realized that MoarVM couldn't implement fork(), but it's true. In POSIX systems, a fork()'d child process inherits the full memory space, as-is, from its parent process. But it inherits only the forking thread. This means that any operations performed by any other thread, including operations that might need to be protected by a mutex (e.g. malloc()), will be interrupted and unfinished (in the child process). This can be a problem. Or, in the words of the linux manual page on the subject:

       *  The child process is created with a single thread—the one that
          called fork(). The entire virtual address space of the parent is
          replicated in the child, including the states of mutexes,
          condition variables, and other pthreads objects; the use of
          pthread_atfork(3) may be helpful for dealing with problems that
          this can cause.

       *  After a fork() in a multithreaded program, the child can safely
          call only async-signal-safe functions (see signal-safety(7)) until
          such time as it calls execve(2).

Note that the set of signal-safe functions is not that large, and excludes all memory management functions. As noted by jnthn, MoarVM is inherently multithreaded from startup, and will happily spawn as many threads as needed by the program. It also uses malloc() and friends rather a lot. So it would seem that perl 6 cannot implement POSIX fork() (in which both parent and child program continue from the same position in the program) as well as perl 5 can.

I was disappointed. As a longtime (and enthusiastic) perl 5 user, fork() is one of my favorite concurrency tools. Its best feature is that parent and child processes are isolated by the operating system, so each can modify its own internal state without concern for concurrency. This can make it practical to introduce concurrency after development, rather than designing it in from the start. Either process can crash while the other continues. It is also possible (and common) to run a child process with reduced privileges relative to the parent process, which isn't possible with threads. And it is possible to prepare arbitrarily complex state for the child process, unlike spawn() and similar calls.

Fortunately all hope isn't necessarily lost. The restrictions listed above only apply if there are multiple threads active at the moment that fork() is executed. That means that if we can stop all threads (except for the one planning to fork) before actually calling fork(), then the child process can continue safely. That is well within the realm of possibility, at least as far as system threads are concerned.

The process itself isn't very exciting to talk about, actually - it involves sending stop signals to system threads, waiting for these threads to join, verifying that the forking thread is really the only active thread, and restarting threads after fork(). Because of locking, it is a bit subtle (there may be another user thread that is also trying to fork), but not actually very hard. When I finally merged the code, it turned out that an ancient (and forgotten) thread list modification function was corrupting the list by not being aware of generational garbage collection... oops. But that was simple enough to fix (thanks to timotimo++ for the hint).

And now the following one-liner should work on platforms that support fork() (using development versions of MoarVM and Rakudo):

perl6 -e 'use nqp; my $i = nqp::fork(); say $i;'
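Expanding that a little, and assuming the usual POSIX convention that fork() returns 0 in the child and the child's pid in the parent:

use nqp;
my $pid = nqp::fork();
if $pid == 0 {
    say "in the child process";
}
else {
    say "in the parent; the child's pid is $pid";
}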

The main limitation of this work is that it won't work if there are any user threads active. (If there are any, nqp::fork() will throw an exception). The reason why is simple: while it is possible to adapt the system threads so that I can stop them on demand, user threads may be doing arbitrary work, hold arbitrary locks and may be blocked (possibly indefinitely) on a system call. So jnthn's comment at the start of this post still applies - threads and fork() don't work together.

In fact, many programs may be better off with threads. But I think that languages in the perl family should let the user make that decision, rather than the VM. So I hope that this will find some use somewhere. If not, it was certainly fun to figure out. Happy hacking!


PS: For the curious, I think there may in fact be a way to make fork() work under a multithreaded program, and it relies on the fact that MoarVM has a multithreaded garbage collector. Basically, stopping all threads before calling fork() is not so different from stopping the world during the final phase of garbage collection. And so - in theory - it would be possible to hijack the synchronization mechanism of the garbage collector to pause all threads. During interpretation, and in JIT compiled code, each thread periodically checks if garbage collection has started. If it has, it will synchronize with the thread that started GC in order to share the work. Threads that are currently blocked (on a system call, or on acquiring a lock) mark themselves as such, and when they are resumed always check the GC status. So it is in fact possible to force MoarVM into a single active thread even with multiple user threads active. However, that still leaves cleanup to deal with, after returning from fork() in the child process. Also, this will not work if a thread is blocked on NativeCall. Altogether I think abusing the GC in order to enable a fork() may be a bit over the edge of insanity :-)

6guts: Eliminating unrequired guards

Published by jnthnwrthngtn on 2018-09-29T19:59:28

MoarVM’s optimizer can perform speculative optimization. It does this by gathering statistics as the program is interpreted, and then analyzing them to find out what types and callees typically show up at given points in the program. If it spots there is at least a 99% chance of a particular type showing up at a particular program point, then it will optimize the code ahead of that point as if that type would always show up.

Of course, statistics aren’t proofs. What about the 1% case? To handle this, a guard instruction is inserted. This cheaply checks if the type is the expected one, and if it isn’t, falls back to the interpreter. This process is known as deoptimization.

Just how cheap are guards?

I just stated that a guard cheaply checks if the type is the expected one, but just how cheap is it really? There are both direct and indirect costs.

The direct cost is that of the check. Here’s a (slightly simplified) version of the JIT compiler code that produces the machine code for a type guard.

/* Load object that we should guard */
| mov TMP1, WORK[obj];
/* Get type table we expect and compare it with the object's one */
MVMint16 spesh_idx = guard->ins->operands[2].lit_i16;
| get_spesh_slot TMP2, spesh_idx;
| cmp TMP2, OBJECT:TMP1->st;
| jne >1;
/* We're good, no need to deopt */
| jmp >2;
|1:
/* Call deoptimization handler */
| mov ARG1, TC;
| mov ARG2, guard->deopt_offset;
| mov ARG3, guard->deopt_target;
| callp &MVM_spesh_deopt_one_direct;
/* Jump out to the interpreter */
| jmp ->exit;
|2:

Where get_spesh_slot is a macro like this:

|.macro get_spesh_slot, reg, idx;
| mov reg, TC->cur_frame;
| mov reg, FRAME:reg->effective_spesh_slots;
| mov reg, OBJECTPTR:reg[idx];
|.endmacro

So, in the case that the guard matches, it’s 7 machine instructions (note: it’s actually a couple more because of something I omitted for simplicity). Thus there’s the cost of the time to execute them, plus the space they take in memory and, especially, the instruction cache. Further, one is a conditional jump. We’d expect it to be false most of the time, and so the CPU’s branch predictor should get a good hit rate – but branch predictor usage isn’t entirely free of charge either. Effectively, it’s not that bad, but it’s nice to save the cost if we can.

The indirect costs are much harder to quantify. In order to deoptimize, we need to have enough state to recreate the world as the interpreter expects it to be. I wrote on this topic not so long ago, for those who want to dive into the detail, but the essence of the problem is that we may have to retain some instructions and/or forgo some optimizations so that we are able to successfully deoptimize if needed. Thus, the presence of a guard constrains what optimizations we can perform in the code around it.

Representation problems

A guard instruction in MoarVM originally looked like:

sp_guard r(obj) sslot uint32

Where r(obj) is an object register to read containing the object to guard, the sslot is a spesh slot (an entry in a per-block constant table) containing the type we expect to see, and the uint32 indicates the target address after we deoptimize. Guards are inserted after instructions for which we had gathered statistics and determined there was a stable type. Things guarded include return values after a call, reads of object attributes, and reads of lexical variables.

This design has carried us a long way; however, it has a major shortcoming. The program is represented in SSA form. Thus, an invoke followed by a guard might look something like:

invoke r6(5), r4(2)
sp_guard r6(5), sslot(42), litui32(64)

Where r6(5) has the return value written into it (and thus is a new SSA version of r6). We hold facts about a value (if it has a known type, if it has a known value, etc.) per SSA version. So the facts about r6(5) would be that it has a known type – the one that is asserted by the guard.

The invoke itself, however, might be optimized by performing inlining of the callee. In some cases, we might then know the type of result that the inlinee produces – either because there was a guard inside of the inlined code, or because we can actually prove the return type! However, since the facts about r6(5) were those produced by the guard, there was no way to talk about what we know of r6(5) before the guard and after the guard.

More awkwardly, while in the early days of the specializer we only ever put guards immediately after the instructions that read values, more recent additions might insert them at a distance (for example, in speculative call optimizations and around spesh plugins). In this case, we could not safely set facts on the guarded register, because those might lead to wrong optimizations being done prior to the guard.

Changing of the guards

Now a guard instruction looks like this:

sp_guard w(obj) r(obj) sslot uint32

Or, concretely:

invoke r6(5), r4(2)
sp_guard r6(6), r6(5), sslot(42), litui32(64)

That is to say, it introduces a new SSA version. This means that we get a way to talk about the value both before and after the guard instruction. Thus, if we perform an inlining and we know exactly what type it will return, then that type information will flow into the input – in our example, r6(5) – of the guard instruction. We can then notice that the property the guard wants to assert is already upheld, and replace the guard with a simple set (which may itself be eliminated by later optimizations).
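To make that concrete, here is a hypothetical before/after sketch in the same notation as above, assuming inlining of the invoke proved the return type. Before:

invoke   r6(5), r4(2)
sp_guard r6(6), r6(5), sslot(42), litui32(64)

Since the facts on r6(5) already uphold what the guard asserts, the guard is rewritten to:

invoke r6(5), r4(2)
set    r6(6), r6(5)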

This also solves the problem with guards inserted away from the original write of the value: we get a new SSA version beyond the guard point. This in turn leads to more opportunities to avoid repeated guards beyond that point.

Quite a lot of return value guards on common operations simply go away thanks to these changes. For example, in $a + $b, where $a and $b are Int, we will be able to inline the + operator, and we can statically see from its code that it will produce an Int. Thus, the guard on the return type in the caller of the operator can be eliminated. This saves the instructions associated with the guard, and potentially allows for further optimizations to take place since we know we’ll never deoptimize at that point.

In summary

MoarVM does lots of speculative optimization. This enables us to optimize in cases where we can’t prove a property of the program, but statistics tell us that it mostly behaves in a certain way. We make this safe by adding guards, and falling back to the general version of the code in cases where they fail.

However, guards have a cost. By changing our representation of them, so that we model the data coming into the guard and after the guard as two different SSA versions, we are able to eliminate many guard instructions. This not only reduces duplicate guards, but also allows for elimination of guards when the broader view afforded by inlining lets us prove properties that we weren’t previously able to.

In fact, upcoming work on escape analysis and scalar replacement will allow us to start seeing into currently opaque structures, such as Scalar containers. When we are able to do that, then we’ll be able to prove further program properties, leading to the elimination of yet more guards. Thus, this work is not only immediately useful, but also will help us better exploit upcoming optimizations.

6guts: Faster box/unbox and Int operations

Published by jnthnwrthngtn on 2018-09-28T22:43:55

My work on Perl 6 performance continues, thanks to a renewal of my grant from The Perl Foundation. I’m especially focusing on making common basic operations faster, the idea being that if those go faster, then programs composed out of them should too. This appears to be working out well: I’ve not been directly trying to make the Text::CSV benchmark run faster, but that’s resulted from my work.

I’ll be writing a few posts on various of the changes I’ve done. This one will take a look at some related optimizations around boxing, unboxing, and common mathematical operations on Int.

Boxing and unboxing

Boxing is taking a natively typed value and wrapping it into an object. Unboxing is the opposite: taking an object that wraps a native value and getting the native value out of it.

In Perl 6, these are extremely common. Num and Str are boxes around num and str respectively. Thus, unless dealing with natives explicitly, working with floating point numbers and strings will involve lots of box and unbox operations.

There’s nothing particularly special about Num and Str. They are normal objects with the P6opaque representation, meaning they can be mixed into. The only thing that makes them slightly special is that they have an attribute marked as being a box target. This singles the attribute out as the one to write to or read from in a box or unbox operation.

Thus, a box operation is something like:
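# a rough NQP-level sketch of those two steps; Str's $!value attribute is
# its box target
use nqp;
my str $native = 'hello';                          # the native value to box
my $boxed := nqp::create(Str);                     # allocate an instance of the box type
nqp::bindattr_s($boxed, Str, '$!value', $native);  # write it into the box target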

While unbox is:
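# the reverse: read the native value back out of the box-target attribute
# (continuing the sketch above)
my str $back = nqp::getattr_s($boxed, Str, '$!value');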

Specialization of box and unbox

box is actually two operations: an allocation and a store. We know how to fast-path allocations and JIT them relatively compactly, however that wasn’t being done for box. So, step one was to decompose this higher-level op into the allocation and the write into the allocated object. The first step could then be optimized in the usual way allocations are.

In the unspecialized path, we first find out where to write the native value to, and then write it. However, when we’re producing a specialized version, we almost always know the type we’re boxing into. Therefore, the object offset to write to can be calculated once, and a very simple instruction to do a write at an offset into the object produced. This JITs extremely well.

There are a couple of other tricks. Binds into a P6opaque generally have to check that the object wasn’t mixed in to, however since we just allocated it then we know that can’t be the case and can skip that check. Also, a string is a garbage-collectable object, and when assigning one GC-able object into another one, we need to perform a write barrier for the sake of generational GC. However, since the object was just allocated, we know very well that it is in the nursery, and so the write barrier will never trigger. Thus, we can omit it.

Unboxing is easier to specialize: just calculate the offset, and emit a simpler instruction to load the value from there.

I’ve also started some early work (quite a long way from merge) on escape analysis, which will allow us to eliminate many box object allocations entirely. It’s a great deal easier to implement this if allocations, reads, and writes to an object have a uniform representation. By lowering box and unbox operations into these lower level operations, this eases the path to implementing escape analysis for them.

What about Int?

Some readers might have wondered why I talked about Num and Str as examples of boxed types, but not Int. It is one too – but there’s a twist. Actually, there’s two twists.

The first is that Int isn’t actually a wrapper around an int, but rather an arbitrary precision integer. When we first implemented Int, we had it always use a big integer library. Of course, this is slow, so later on we made it so any number fitting into a 32-bit range would be stored directly, and only allocate a big integer structure if it’s outside of this range.

Thus, boxing to a big integer means range-checking the value to box. If it fits into the 32-bit range, then we can write it directly, and set the flag indicating that it’s a small Int. Machine code to perform these steps is now spat out directly by the JIT, and we only fall back to a function call in the case where we need a big integer. Once again, the allocation itself is emitted in a more specialized way too, and the offset to write to is determined once at specialization time.

Unboxing is similar. Provided we’re specializing on the type of the object to unbox, then we calculate the offset at specialization time. Then, the JIT produces code to check if the small Int flag is set, and if so just reads and sign extends the value into a 64-bit register. Otherwise, it falls back to the function call to handle the real big integer case.

For boxing, however, there was a second twist: we have a boxed integer cache, so for small integers we don’t have to repeatedly allocate objects boxing them. So boxing an integer is actually:

  1. Check if it’s in the range of the box cache
  2. If so, return it from the cache
  3. Otherwise, do the normal boxing operation

When I first did these changes, I omitted the use of the box cache. It turns out, however, to have quite an impact in some programs: one benchmark I was looking at suffered quite a bit from the missing box cache, since it now had to do a lot more garbage collection runs.

So, I reinstated use of the cache, but this time with the JIT doing the range checks in the produced machine code and reading directly out of the cache in the case of a hit. Thus, in the cache hit case, we now don’t even make a single function call for the box operation.
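Putting the pieces together, the overall boxing logic amounts to something like this Perl 6-level sketch; the cache bounds and the allocate-box helper here are invented for illustration (the real bounds are a VM implementation detail):

use nqp;
constant CACHE-MIN = 0;
constant CACHE-MAX = 63;

sub allocate-box(int $v) {        # stand-in for the allocate-and-store path
    nqp::box_i($v, Int)
}

my @box-cache = (CACHE-MIN..CACHE-MAX).map: { allocate-box($_) };

sub box-int(int $value) {
    CACHE-MIN <= $value <= CACHE-MAX
        ?? @box-cache[$value - CACHE-MIN]   # cache hit: no allocation at all
        !! allocate-box($value)             # cache miss: allocate as usual
}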

Faster Int operations

One might wonder why we picked 32-bit integers as the limit for the small case of a big integer, and not 64-bit integers. There’s two reasons. The most immediate is that we can then use the 32 bits that would be the lower 32 of a 64-bit pointer to the big integer structure as our “this is a small integer” flag. This works reliably as pointers are always aligned to at least a 4-byte boundary, so a real pointer to a big integer structure would never have the lowest bits set. (And yes, on big-endian platforms, we swap the order of the flag and the value to account for that!)

The second reason is that there’s no portable way in C to detect if a calculation overflowed. However, out of the basic math operations, if we have two inputs that fit into a 32-bit integer, and we do them at 64-bit width, we know that the result can never overflow the 64-bit integer. Thus we can then range check the result and decide whether to store it back into the result object as 32-bit, or to store it as a big integer.
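The same trick, sketched in Perl 6 rather than the C the VM actually uses (store-small and store-big are hypothetical stand-ins for the two storage paths):

sub store-small(int64 $v) { $v }   # hypothetical: keep the compact 32-bit form
sub store-big(int64 $v)   { $v }   # hypothetical: allocate a big integer structure

sub add-smalls(int32 $a, int32 $b) {
    my int64 $wide = $a + $b;      # both inputs fit in 32 bits, so this cannot overflow
    -2**31 <= $wide < 2**31
        ?? store-small($wide)      # result still fits: stay small
        !! store-big($wide)        # out of range: upgrade to a big integer
}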

Since Int is immutable, all operations result in a newly allocated object. This allocation – you’ll spot a pattern by now – is open to being specialized. Once again, finding the boxed value to operate on can also be specialized, by calculating its offset into the input objects and result object. So far, so familiar.

However, there’s a further opportunity for improvement if we are JIT-compiling the operations to machine code: the CPU has flags for if the last operation overflowed, and we can get at them. Thus, for two small Int inputs, we can simply:

  1. Grab the values
  2. Do the calculation at 32-bit width
  3. Check the flags, and store it into the result object if no overflow
  4. If it overflowed, fall back to doing it wider and storing it as a real big integer

I’ve done this for addition, subtraction, and multiplication. Those looking for a MoarVM specializer/JIT task might like to consider doing it for some of the other operations. :-)

In summary

Boxing, unboxing, and math on Int all came with various indirections for the sake of generality (coping with mixins, subclassing, and things like IntStr). However, when we are producing type-specialized code, we can get rid of most of the indirections, resulting in being able to perform them faster. Further, when we JIT-compile the optimized result into machine code, we can take some further opportunities, reducing function calls into C code as well as taking advantage of access to the overflow flags.

samcv: Adding and Improving File Encoding Support in MoarVM

Published on 2018-09-26T07:00:00

Encodings. They may seem to some as horribly boring and bland. Nobody really wants to have to worry about the details. Encodings are how we communicate, transmit data. Their purpose is to be understood by others. When they work, nobody should even know they are there. When they don’t — and everyone can probably remember a time when they have tried to open a poorly encoded or corrupted file and been disappointed — they cannot be ignored.

Here I talk about the work I have done in the past and work still in progress to improve the support of current encodings, add new encodings, and add new features and new options to allow Perl 6 and MoarVM to support more encodings and in a way which better achieves the goals encodings were designed to solve.


Which Encodings Should We Add?

I started looking at the most common encodings on the Web. We supported the two most common, UTF-8 and ISO-8859-1, but we did not support windows-1251 (Cyrillic) or Shift-JIS. This seemed like the makings of a new project so I got to work.

I decided to start with windows-1251 since it was an 8-bit encoding, while Shift-JIS was a variable-length one or two byte encoding (the 'shift' in the name comes from the second byte's shift). While the encoding itself was simpler than Shift-JIS, I soon ran into some issues with both windows-1251 and our already supported windows-1252 encoding.

One of the first major issues I found was that in windows-1252 you could create files which could not be decoded by other programs. An example of this is codepoint 0x81 (129), which does not have a mapping in the windows-1252 specification. This would cause many programs to not detect the file as windows-1252, or to fail saying the file was corrupted and could not be decoded.

While our non-8-bit encodings would throw an error if asked to encode text which did not exist in that code mapping, our 8-bit encodings would happily pass through invalid codepoints, as long as they fit in 8 bits.

As I said at the start, encodings are a way for us to communicate, to exchange data and to be understood. This to me indicated a major failing. While the solution could be to never attempt to write codepoints that don’t exist in the target encoding, that was not an elegant solution and would just be a workaround.

To remedy this I had to add new ops, such as decodeconf to replace the decode op, and encodeconf to replace encode (plus many others). These allow us to specify a configuration for our encodings, so Perl 6 can tell MoarVM whether it wants to encode strictly according to the standard or to be more lenient.

I added a new :strict option to open, encode and a few others to allow you to specify whether encoding should be strict or not. For the sake of backwards compatibility it still defaults to lenient encoding and decoding. Strict is planned to become the default for Perl 6 specification 6.e.
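Usage should look something like this (reusing the 0x81 example from above):

# lenient (the default): 0x81 passes straight through, since it fits in 8 bits
my $lenient = "\x[81]".encode('windows-1252');

# strict: the same call throws, as 0x81 has no mapping in windows-1252
my $strict = "\x[81]".encode('windows-1252', :strict);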

Replacements

Some of our other encodings had support for 'replacements'. If you tried to encode something that would not fit in the target encoding, this allowed you to supply a string of one or more characters which would be substituted instead of causing MoarVM to throw an exception.

Once I had the strictness nailed down I was able to add support for replacements, so one can write data in strict mode while still ensuring all compatible characters get written properly. You no longer have to choose between being lenient (and writing incompatible files) and being strict (and having the encode or file write fail just when you need it not to).
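That combination should look something like the following (assuming :strict and :replacement can be combined on encode, which is the intent described above):

# strict about the standard, but substituting '?' for anything unmappable
# rather than throwing
my $buf = "foo\x[81]bar".encode('windows-1252', :strict, :replacement('?'));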

Shift-JIS

Encodings are not without difficulty. As with the previous encodings I talked about, Shift-JIS would not be without decisions that had to be made. With Shift-JIS, the question became, "which Shift-JIS?".

You see, there are dozens of different extensions to Shift-JIS, in addition to the original Shift-JIS encoding. As a variable length encoding, most of the one byte codepoints had been taken, while there were many hundreds of open and unallocated codepoints in the two byte range. This saw the wide proliferation of manufacturer-specific extensions, which other manufacturers might adopt while adding their own extensions to the format.

Which of the dozens of different Shift-JIS encodings should we support? I eventually decided on windows-932 because that is the standard that is used by browsers when they encounter Shift-JIS text and encompasses a great many symbols. It is the most widely used Shift-JIS format. This would allow us to support the majority of Shift-JIS encoded documents out there, and the one which web browsers use on the web. Most but not all the characters in it are compatible with the other extensions, so it seemed like it was the sensible choice.

The only notable exception to this is that windows-932 is not totally compatible with the original Shift-JIS encoding. While the original Shift-JIS encoding (and some extensions to it) maps ASCII's backslash to the yen symbol ¥, windows-932 keeps it as the backslash. This was to retain better compatibility with ASCII and other formats.

UTF-16

While MoarVM had a UTF-16 encoder and decoder, it was not fully featured.

  • You could encode a string into a utf16 buffer:

    • "hello".encode('utf16') #> utf16:0x<68 65 6c 6c 6f>

  • You could decode a utf16 buffer into a string:

    • utf16.new(0x68, 0x65, 0x6C, 0x6C, 0x6F).decode('utf16') #> hello

That was about it. You couldn’t open a filehandle as UTF-16 and write to it. You couldn’t even do $fh.write: "hello".encode('utf16') as the write function did not know how to deal with writing a 16 bit buffer (it expected an 8-bit buffer).

In addition there was no streaming UTF-16 decoder, so there was no way to read a UTF-16 file. So I set out to work, first allowing us to write 16-bit buffers to a file and then being able to .print a filehandle and write text to it.

At this point I knew that I would have to confront the BOM in the room, but decided to first implement the streaming decoder and ensure all of our file handle operations worked.
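Once those operations were in place, round-tripping a UTF-16 file should work along these lines (using utf16le to pin the endianness explicitly):

my $fh = open 'out.txt', :w, :enc('utf16le');
$fh.print: 'hello';
$fh.close;

say slurp 'out.txt', :enc('utf16le');   # hello - read back via the streaming decoder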

The BOM in the Room

You may be noticing a theme here. You do not just implement an encoding. It does not exist in a vacuum. While there may be standards, there may be multiple standards. And those standards may or may not be followed in practice or it may not be totally straightforward.

Endianness defines the order of the bytes in a number. Big endian machines will store a 16-bit number with the most significant byte first, similar to how we write numbers, while little endian machines will put the least significant byte first. This only matters for encoding numbers that are more than one byte long. UTF-16 can be either big endian or little endian, meaning big and little endian files will contain the same bytes, but in a different order.

Since swapping the two bytes of a 16-bit integer yields a different valid integer rather than an invalid one, the creators of UTF-16 knew it was not possible to determine with certainty the endianness of a 16-bit number. And so the byte order mark was created: a codepoint added at the start of a file to signify which endianness the file was in. Since they didn't want to break already existing software, they used a "Zero width no-break space" (ZWNBSP) and designated that a ZWNBSP with its bytes reversed would be invalid in UTF-16, a "non-character".

If the BOM is not removed, it is not visible, since it is zero width. If a program opens a file and sees the byte order mark, it knows the file is in its own endianness. If it sees the "non-character", it knows it has to swap the bytes as it reads the file.
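A sketch of that detection over the first two bytes of a raw file (U+FEFF is FE FF on disk in big endian; byte-swapped it reads FF FE):

my $buf = slurp 'data.txt', :bin;
my $endianness = do if    $buf[0] == 0xFE && $buf[1] == 0xFF { 'big' }
                    elsif $buf[0] == 0xFF && $buf[1] == 0xFE { 'little' }
                    else                                     { 'no BOM; decide from context' };
say $endianness;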

The standard states that utf16 should always use a BOM and that when reading a file as utf16 the BOM should be used to detect the endianness. It states the BOM is not passed through, as it is assumed not to be part of the actual data. If it doesn’t exist, then the system is supposed to decode using the endianness that the context suggests. So you get a format where you may lose the first codepoint of your file if that codepoint happens to be a zero width no-break space.

For utf16be (big endian) and utf16le (little endian) the standard states the byte order mark should not exist, and that any zero width no-break space at the start of a file is just that, and should be passed through.

Standards and practice are not identical and in practice many programs will remove a leading zero width non-break space even if set explicitly to utf16be or utf16le. I was unsure which route I should take. I looked at how Perl 5 handled it and after thinking for a long time I decided that we would follow the spec. Even though other software will get rid of any detected byte order mark, I think it’s important that if a user specifies utf16be or utf16le, they will always get back the same codepoints that they write to a file.

Current Work

I am currently working on adding support so that when you write a file in 'utf16' it will add a byte order mark, as utf16 is supposed to always have one. Until this is added, you can write a file with Perl 6 on one computer using the 'utf16' encoding which will not decode correctly using the same 'utf16' setting if the two computers are of different endianness.

Since there was no functionality for writing utf-16 to a file previously, there should be no backwards compatibility issues to worry about there and we can set it to add a byte order mark if the utf16 encoder is used.

Release 2018.09

In the last release utf16, utf16le and utf16be should work, aside from utf16 not adding a byte order mark on file write. Anyone who uses utf16 and finds issues, or has comments, is welcome to email me or open an issue on the MoarVM or Rakudo issue trackers.

Zoffix Znet: The 100 Day Plan: The Update on Perl 6.d Preparations

Published on 2018-08-09T00:00:00

Info on how 6.d release prep is going

Zoffix Znet: Introducing: Perl 6 Marketing Assets Web App

Published on 2018-08-05T00:00:00

Get your Perl 6 flyers and brochures

Zoffix Znet: Introducing: Newcomer Guide to Contributing to Core Perl 6

Published on 2018-08-02T00:00:00

Info on the new guide for newcomers

6guts: Redesigning Rakudo’s Scalar

Published by jnthnwrthngtn on 2018-07-26T23:54:45

What’s the most common type your Perl 6 code uses? I’ll bet you that in most programs you write, it’ll be Scalar. That might come as a surprise, because you pretty much never write Scalar in your code. But in:

my $a = 41;
my $b = $a + 1;

Then both $a and $b point to Scalar containers. These in turn hold the Int objects. Contrast it with:

my $a := 42;
my $b := $a + 1;

Where there are no Scalar containers. Assignment in Perl 6 is an operation on a container. Exactly what it does depends on the type of the container. With an Array, for example, it iterates the data source being assigned, and stores each value into the target Array. Assignment is therefore a copying operation, unlike binding which is a referencing operation. Making assignment the shorter thing to type makes it more attractive, and having the more attractive thing decrease the risk of action at a distance is generally a good thing.

Having Scalar be first-class is used in a number of features:

And probably some more that I forgot. It’s powerful. It’s also torture for those of us building Perl 6 implementations and trying to make them run fast. The frustration isn’t so much the immediate cost of allocating all of those Scalar objects – that of course costs something, but modern GC algorithms can throw away short-lived objects pretty quickly – but rather the difficulties it introduces for program analysis.

Despite all the nice SSA-based analysis we do, tracking the contents of Scalar containers is currently beyond that. Rather than any kind of reasoning to prove properties about what a Scalar holds, we instead handle it through statistics, guards, and deoptimization at the point that we fetch a value from a Scalar. This still lets us do quite a lot, but it’s certainly not ideal. Guards are cheap, but not free.

Looking ahead

Over the course of my current grant from The Perl Foundation, I’ve been working out a roadmap for doing better with optimization in the presence of Scalar containers. Their presence is one of the major differences between full Perl 6 and the restricted NQP (Not Quite Perl), and plays a notable part in the performance difference between the two.

I’ve taken the first big step towards improving this situation by significantly re-working the way Scalar containers are handled. I’ll talk about that in this post, but first I’d like to provide an idea of the overall direction.

In the early days of MoarVM, when we didn’t have specialization or compilation to machine code, it made sense to do various bits of special-casing of Scalar. As part of that, we wrote code handling common container operations in C. We’ve by now reached a point where the C code that used to be a nice win is preventing us from performing the analyses we need in order to do better optimizations. At the end of the day, a Scalar container is just a normal object with an attribute $!value that holds its value. Making all operations dealing with Scalar containers really be nothing more than some attribute lookups and binds would allow us to solve the problem in terms of more general analyses, which stand to benefit many other cases where programs use short-lived objects.
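Seen that way, assignment and decontainerization reduce (ignoring the descriptor type-check and whence handling) to little more than this sketch:

use nqp;
my $a;
my $cont := $a;                                       # bind to the Scalar container itself
nqp::bindattr($cont, Scalar, '$!value', 42);          # assignment: an attribute bind
my $value := nqp::getattr($cont, Scalar, '$!value');  # fetch: an attribute lookup
say $value;                                           # 42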

The significant new piece of analysis we’ll want to do is escape analysis, which tells us which objects have a lifetime bounded to the current routine. We understand “current routine” to incorporate those that we have inlined.

If we know that an object’s usage lies entirely within the current routine, we can then perform an optimization known as scalar replacement, which funnily enough has nothing much to do with Scalar in the Perl 6 sense, even if it solves the problems we’re aiming to solve with Scalar! The idea is that we allocate a local variable inside of the current frame for each attribute of the object. This means that we can then analyze them like we analyze other local variables, subject them to SSA, and so forth. For one, this gets rid of the allocation of the object, but it also lets us replace attribute lookups and binds with one level of indirection less. It will also let us reason about the contents of the once-attributes, so that we can eliminate guards that we previously inserted because we only had statistics, not proofs.

So, that’s the direction of travel, but first, Scalar and various operations around it needed to change.

Data structure redesign

Prior to my recent work, a Scalar looked something like:

class Scalar {
    has $!value;        # The value in the Scalar
    has $!descriptor;   # rw-ness, type constraint, name
    has $!whence;       # Auto-vivification closure
}

The $!descriptor held the static information about the Scalar container, so we didn’t have to hold it in every Scalar (we usually have many instances of the same “variable” over a program’s lifetime).

The $!whence was used when we wanted to do some kind of auto-vivification. The closure attached to it was invoked when the Scalar was assigned to, and then cleared afterwards. In an array, for example, the callback would bind the Scalar into the array storage, so that element – if assigned to – would start to exist in the array. There are various other forms of auto-vivification, but they all work in roughly the same way.
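The effect is observable from ordinary Perl 6 code: merely binding to an element hands you a fresh Scalar whose vivification callback has not yet fired.

my @a;
my $elem := @a[3];   # a fresh Scalar with a vivification callback attached
say @a.elems;        # 0 - nothing has actually been stored in the array yet
$elem = 'x';         # assigning fires the callback, binding the Scalar into @a
say @a.elems;        # 4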

This works, but closures aren’t so easy for the optimizer to deal with (in short, a closure has to have an outer frame to point to, and so we can’t inline a frame that takes a closure). Probably some day we’ll find a clever solution to that, but since auto-vivification is an internal mechanism, we may as well make it one that we can see a path to making efficient in the near term future.

So, I set about considering alternatives. I realized that I wanted to replace the $!whence closure with some kind of object. Different types of object would do different kinds of vivification. This would work very well with the new spesh plugin mechanism, where we can build up a set of guards on objects. It also will work very well when we get escape analysis in place, since we can then potentially remove those guards after performing scalar replacement. Thus after inlining, we might be able to remove the “what kind of vivification does this assignment cause” checking too.

So this seemed workable, but then I also realized that it would be possible to make Scalar smaller: the vivification object can go in the $!descriptor slot (carrying a reference to the real descriptor for when it is needed), so the $!whence attribute can go away entirely.

This not only makes Scalar smaller, but it means that we can use a single guard check to indicate the course of action we should take with the container: a normal assignment, or a vivification.

The net result: vivification closures go away giving more possibility to inline, assignment gets easier to specialize, and we get a memory saving on every Scalar container. Nice!

C you later

For this to be really worth it from an optimization perspective, I needed to eliminate various bits of C special-case code around Scalar and replace it with standard MoarVM ops. This implicated:

The first 3 became calls to code registered to perform the operations, using the 6model container API. The second two cases were handled by replacing the calls to C extops with desugars, which is a mechanism that takes something that is used as an nqp::op and rewrites it, as it is compiled, into a more interesting AST, which is then in turn compiled. Happily, this meant I could make all of the changes I needed to without having to go and do a refactor across the CORE.setting. That was nice.

So, now those operations were compiled into bytecode operations instead of ops that were really just calls to C code. Everything was far more explicit. Good! Alas, the downside is that the code we generate gets larger in size.

Optimization with spesh plugins

I talked about specializer plugins in a recent post, where I used them to greatly speed up various forms of method dispatch. However, they are also applicable to optimizing operations on Scalar containers.

The change to decontainerizing return values was especially bad at making the code larger, since it had to do quite a few checks. However, with a spesh plugin, we could just emit a use of the plugin, followed by calling whatever the plugin produces.

Here’s a slightly simplified version of the plugin I wrote, annotated with some comments about what it is doing. The key thing to remember about a spesh plugin is that it is not doing an operation, but rather it’s setting up a set of conditions under which a particular implementation of the operation applies, and then returning that implementation.

nqp::speshreg('perl6', 'decontrv', sub ($rv) {
    # Guard against the type being returned; if it's a Scalar then that
    # is what we guard against here (nqp::what would normally look at
    # the type inside such a container; nqp::what_nd does not do that).
    nqp::speshguardtype($rv, nqp::what_nd($rv));

    # Check if it's an instance of a container.
    if nqp::isconcrete_nd($rv) && nqp::iscont($rv) {
        # Guard that it's concrete, so this plugin result only applies
        # for container instances, not the Scalar type object.
        nqp::speshguardconcrete($rv);

        # If it's a Scalar container then we can optimize further.
        if nqp::eqaddr(nqp::what_nd($rv), Scalar) {
            # Grab the descriptor.
            my $desc := nqp::speshguardgetattr($rv, Scalar, '$!descriptor');
            if nqp::isconcrete($desc) {
                # Has a descriptor, so `rw`. Guard on type of value. If it's
                # Iterable, re-containerize. If not, just decont.
                nqp::speshguardconcrete($desc);
                my $value := nqp::speshguardgetattr($rv, Scalar, '$!value');
                nqp::speshguardtype($value, nqp::what_nd($value));
                return nqp::istype($value, $Iterable) ?? &recont !! &decont;
            }
            else {
                # No descriptor, so it's already readonly. Return as is.
                nqp::speshguardtypeobj($desc);
                return &identity;
            }
        }

        # Otherwise, full slow-path decont.
        return &decontrv;
    }
    else {
        # No decontainerization to do, so just produce identity.
        return &identity;
    }
});

Where &identity is the identity function, &decont removes the value from its container, &recont wraps the value in a new container (so an Iterable in a Scalar stays as a single item), and &decontrv is the slow-path for cases that we do not know how to optimize.

The same principle is also used for assignment, however there are more cases to analyze there. They include:

Vivifying hash assignments are not yet optimized by the spesh plugin, but will be in the near future.

The code selected by the plugin is then executed to perform the operation. In most cases, there will only be a single specialization selected. In that case, the optimizer will inline that specialization result, meaning that the code after optimization is just doing the required set of steps needed to do the work.

Next steps

Most immediately, a change to such a foundational part of the Rakudo Perl 6 implementation has had some fallout. I’m most of the way through dealing with the feedback from toaster (which runs all the ecosystem module tests), being left with a single issue directly related to this work to get to the bottom of. Beyond that, I need to spend some time re-tuning array and hash access to better work with these changes.

Then will come the step that this change was largely in aid of: implementing escape analysis and scalar replacement, which for much Perl 6 code will hopefully give a quite notable performance improvement.

This brings me to the end of my current 200 hours on my Perl 6 Performance and Reliability Grant. Soon I will submit a report to The Perl Foundation, along with an application to continue this work. So, all being well, there will be more to share soon. In the meantime, I’m off to enjoy a week’s much needed vacation.

samcv: Secure Hashing for MoarVM to Prevent DOS Attacks

Published on 2018-05-16T07:00:00

Hashes are very useful data structures and underlie many internal representations in Perl 6 as well as being used as themselves. These data structures are very nice since they offer O(1) insertion time and O(1) lookup time on average. Hashes have long been considered an essential feature for Perl, much loved by users. Though when exploited, hashes can cause servers to grind to a halt. New in Rakudo Perl 6 2018.5 will be a feature called hash randomization which does much to help protect against this attack. In this article I explain some hashing basics as well as how the attack against non-randomized hashing can work.


Hashing Basics

Some hashing basics: when we use a hash, we take a string and come up with a unique integer to represent the string. Similar to how md5 or sha1 sums take an arbitrary amount of data and condense it into a shorter number which can identify it, we do a similar thing for strings.

my %hash; %hash<foo> = 10

In this code, MoarVM takes the string foo and performs a hashing function on it using a series of bitwise operations. The goal is to create a shorter number which allows us to put the foo key into one of the 8 buckets that MoarVM initializes when a hash is created.

[Figure: 8 hash buckets]

Our hashing code sets up a predefined number of buckets. When a bucket fills up to hold 10 items, we double the number of buckets. In normal operation the hashes will be randomly distributed, so it would take ≈47 keys added (≈47 is the average number of items needed for one bucket to fill up to 10 items) before we have to expand the buckets the first time.

When the buckets are expanded, we will now have 16 buckets. In normal operation our previous ≈47 items should be evenly distributed into those 16 buckets.
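Since the bucket count stays a power of two, picking a bucket from a hash value is typically just a matter of masking off its low bits - along these lines:

my $hash-value = 0xDEADBEEF;           # stand-in for a computed string hash
my $n-buckets  = 8;
say $hash-value +& ($n-buckets - 1);   # the low 3 bits pick one of the 8 buckets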

The Attack

Without a random hash seed it is easy for an attacker to generate strings which will result in the same hash. This devolves to O(n²) time for the hash lookup. This O(n²) is actually O(string_length * num_collisions). When we have hash collisions, that means that no matter how many times we double the number of buckets we have, the strings which have hash collisions will always remain in the same bucket as each other. To locate the correct string, MoarVM must go down the chain and compare each hash value with the one we’re looking for. Since they are all the same, we must fall back to also checking each string itself manually until we find the correct string in that bucket.

[Figure: hash collision]

This attack is done by creating a function that is essentially our hashing function backward (for those curious, see here for an example of code which does forward and backward hashing for the Chrome V8 engine’s former hashing function). We hash our target string, t. We then take random 3 character sequences (in our case graphemes) and plug them into our backward hashing function along with the hash for our target t. The backward hash and the random character sequence are stored in a dictionary and the process is repeated until we have a very large number of backward hashes and random 3 grapheme prefixes.

We can then use this dictionary to construct successively longer strings (or short if we so desire) which are the same hash as our target string t. This is a simplification of how the Meet-In-The-Middle attack works.

This has been fixed in most programming languages (Python, Ruby, Perl), and several CVEs have been issued over the years for this exploit (see CVEs for PHP, OCaml, Perl, Ruby and Python).

Assuming everything is fine for the next release I will also merge changes which introduce a stronger hashing function called SipHash. SipHash is meant to protect against an attacker discovering a hash secret remotely. While randomizing the seed makes this attack much harder, a determined attacker could discover the hash secret, and if that is done they can easily perform a meet-in-the-middle attack. SipHash was designed to solve the vulnerability of the hash function itself to meet-in-the-middle attacks. The randomization of the hash secret and a non-vulnerable hashing function work together to avert hash collision denial of service attacks.

While the hash secret randomization will be out in Rakudo 2018.05, SipHash is planned to be introduced in Rakudo 2018.06.

Randomness Source

On Linux and Unix we prefer function calls rather than reading from /dev/urandom. There are some very important reasons for this.

Relying on an external file existing is potentially problematic. If we are in a chroot and /dev is not mounted we will not have access to /dev/urandom. /dev/urandom is not special, it can be deleted by accident (or on purpose) or a sparse data file mounted in its place undetectable by programs. Trusting it simply because of its path is not ideal. Also, if we exhaust the maximum number of open file descriptors we will be unable to open /dev/urandom as well.

System Functions

On Windows we use pCryptGenRandom, which is provided by advapi32.dll since Windows XP.

Linux, FreeBSD, OpenBSD and MacOS all use system provided random calls (if available) to get the data, rather than having to open /dev/urandom. All these OS’s guarantee these calls to be non-blocking, though MacOS’s documentation does not comment on it. This is mostly important in very early userspace; it bit Python when a developer accidentally changed the randomness source, causing systems which ran Python scripts very early in boot to hang waiting for the randomness source to initialize.

If the function doesn’t exist we fall back to using /dev/urandom. If opening or reading it fails, on BSDs we will use the arc4random() function. In many BSDs this is seeded from the system’s random entropy pool, providing us with a backup in case /dev/urandom doesn’t exist.

On Linux we use the getrandom() system call, which was added in kernel 3.17, calling it directly rather than through the glibc wrapper, since the wrapper was added to glibc much later than the system call was added to the kernel.

On MacOS, Solaris and FreeBSD we use getrandom() while on OpenBSD we use getentropy().

User Facing Changes

From Rakudo Perl 6 2018.05, the order that keys are returned will be random between each perl6 instance.

perl6 -e 'my %hash = <a 1 b 1 c 1 d 1 e 1 f 1>; say %hash.keys'
(d f c a b e)
perl6 -e 'my %hash = <a 1 b 1 c 1 d 1 e 1 f 1>; say %hash.keys'
(e f a d c b)

This will also affect iterating over a hash without sorting: for %hash { }

What Do I Have To Do?

Users and module developers should make sure that they explicitly sort hashes and not rely on a specific order being constant. If you have a module, take a look at the code and see where you are iterating on a hash’s keys and whether or not the order of processing the hash’s keys affects the output of the program.

# This should be okay since we are putting the hash into another hash, order
# does not matter.
for %hash.keys -> $key {
    %stuff{$key} = $i++;
}
# This can potentially cause issues, depending on where `@stuff` is used.
for %hash.keys -> $key {
    @stuff.push: $key;
}
# This should be OK since we are using is-deeply and comparing a hash with another
# hash
is-deeply my-cool-hash-returning-function($input), %( foo => 'text', bar => 'text', baz => 'text');
# Probably best to avoid using `is`. The `is` test function converts the input to a string before
# checking for equality, but works since we stringify it in sorted order.
is %hash,  %( foo => 'text', bar => 'text', baz => 'text');
# NO. Keys are not guaranteed to be in the same order on each invocation
is %hash.keys, <a b c d>;
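
For the cases above that are not okay, the fix is usually just an explicit sort before iterating:

# deterministic regardless of hash order randomization
for %hash.keys.sort -> $key {
    @stuff.push: $key;
}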

Module Developers

Module developers should check out the git master of Rakudo, or if 2018.05 has been released, use that to run the tests of your module. Make sure to run the tests multiple times, ideally at least 10 times or use a loop:

while prove -e 'perl6 -Ilib'; do true; done

This loop will run again and again until it encounters a test failure, in which case it will stop.

You must run your tests many times because the hash order will be different on each run. For hashes with a small number of items, a problem may not show up on every run. Make sure that you also look at the source to identify items that need fixing; don’t just rely on the tests to tell you whether you must make changes to your module.

Further Reading

Hardening Perl’s Hash Function, article by Booking.com about changes Perl 5 has made to harden hashing.