# pl6anet

Steve Mynott (Freenode: stmuk) steve.mynott (at)gmail.com / 2018-03-17T10:11:15

## 6guts: Remote Debug Support for MoarVM

Some years back, I worked on bringing basic debug support to Rakudo Perl 6. It works by taking the program AST and instrumenting it. This approach produced a somewhat useful result, but also carries a number of limitations:

• It required all code that is to be debugged to be recompiled, which gets slow when there are many modules – especially since there isn’t a way to cache this debug version. Back when I wrote it, all Perl 6 programs were pretty small. Today, larger systems are being built, with more significant module dependency chains.
• It caused a fairly significant slowdown to the code that was being debugged.
• It wasn’t useful to Rakudo core developers, and less useful than would be ideal for Perl 6 power users, as it could not debug into the built-ins and compiler. It’s great that so much of Perl 6 is written in Perl 6, but that was sadly off-limits for the existing debugger. There wasn’t much we could easily do to address that, either.
• There wasn’t any way to debug remotely. Technically, we probably could have built this, as the debugger had pluggable frontends. However, to my knowledge, the only one ever written was the CLI (Command Line Interface) that exists to this day.
• That CLI never learned to deal with threads. Given one of Perl 6’s selling points is its concurrency support, that’s a significant limitation.

### Enter the Edument Team

At Edument, the company where I work, we’re keen to support Perl 6 and drive it forward. Last summer, we released Cro. In the autumn, we started work on our next Perl 6 product – which we’ll be announcing in just a couple more weeks. Along the way, we realized that it really needed Perl 6 to have a better debugging solution. So what did we do?

We decided to pitch in and fund its development, of course! During the winter, we’ve worked on adding remote debug support to MoarVM, and today we’ve opened a pull request with it.

### Some details

With our additions, MoarVM can be started with a flag indicating it should run a debug server, along with a port that it should listen on. It can then either wait for a connection before doing anything, or run the program as normal but allow for a connection to be made later.

The debug protocol is defined in terms of MessagePack, which you can think of as being like a binary form of JSON. The PR includes documentation of the protocol, and we hope that having based it on an existing serialization format will make implementing it easy for those who wish to do so.
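To make the "binary JSON" comparison concrete, here is a tiny Python sketch that hand-encodes one small map following the public MessagePack spec. The message fields (`type`, `id`) are invented for illustration; the real message shapes are defined in the PR's protocol documentation:

```python
# Hand-encode {"type": 1, "id": 42} per the MessagePack spec:
# a map with n <= 15 entries is one byte 0x80|n, a string of
# length <= 31 bytes is 0xa0|len followed by its UTF-8 bytes,
# and a small non-negative integer is encoded as the byte itself.
def encode_small_map(d):
    out = bytearray([0x80 | len(d)])        # fixmap header
    for key, val in d.items():
        kb = key.encode("utf-8")
        out.append(0xA0 | len(kb))          # fixstr header
        out += kb
        assert 0 <= val <= 0x7F             # positive fixint only
        out.append(val)
    return bytes(out)

msg = encode_small_map({"type": 1, "id": 42})
print(msg.hex())  # 82a47479706501a269642a
```

In practice a client would use an existing MessagePack library rather than encoding by hand; the point is that the wire format is compact and has implementations in essentially every language.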

The implemented features available through the protocol include:

• Enumerating threads and, when they are suspended, getting their stack traces
• Reading the lexical variables of a callframe
• Setting breakpoints and getting notified if they are hit (and, optionally, suspending execution)
• Stepping
• Fetching object attributes, array elements, and hash keys/values

We’ve more plans for the future, but this is already quite a decent feature set that opens up a lot of possibilities.

### A Perl 6 library and basic debug CLI

Along with the debug server implementation in MoarVM, we’re also releasing a Perl 6 module that speaks the debug protocol, along with a basic, but useful, command line application built using it. These aren’t things that we directly needed, but we hope the CLI will be useful (and, of course, contributions are welcome – I’m sure I’ll be patching it some), and that the library will pave the way for further interesting tools that might be built in terms of the debug protocol.

### In Closing

All being well, the next MoarVM release will include remote debug support. With time, this will lead to much better debugging support for Perl 6, to keep pace with the growing size and concurrent nature of applications people are building in Perl 6 today. Enjoy!

## Weekly changes in and around Perl 6: 2018.11 Lockless Gems

During and shortly after a well-deserved holiday, Jonathan Worthington created some nice modules for concurrent, but lockless, data-structure primitives:

Strictly speaking, this has not been work on the Rakudo Perl 6 core itself, but these modules definitely have a core-functionality feel to them, as do many of the other modules Jonathan has already made.

## Perl 6 tidy

Jeff Goff has uploaded the very first public version of a perltidy for Perl 6. I’m sure it’s not going to be the last one: release early, release often!

## German Perl Workshop

The schedule of the German Perl Workshop 2018 (4 to 6 April) shows quite a number of Perl 6 related items:

And Andrew Shitov is giving three days of Perl 6 related training sessions:

So, this is your chance to be deeply immersed in Perl 6 for three whole days! And on the Saturday after the workshop, there will be a hackathon where no doubt some Rakudo Perl 6 hacking will take place!

## The Perl Conference in Salt Lake City

The Perl Conference 2018 (19 – 22 June) has released the latest newsletter. It lists the presentations that have already been accepted. Of those, the following are Perl 6 related:

You can still submit a talk (or another one) until the 28th of March!

## Core Developments

• Ticket status of past week.
• Zoffix Znet published the 3rd revision of his Polishing Rationals proposal. He also straightened up the handling of $/ with regards to Str.subst and Str.subst-mutate.
• Daniel Green improved the optimizability of the infix:<,> List operator for 2 elements.
• Timo Paulssen made sure that closing the last .tap on a Supply that was created by signal() (such as signal(SIGINT).tap: { }) will restore the original low-level signal handler. He also added a :datagram named parameter to IO::Socket::Async.udp.Supply to allow easy access to the .hostname and .port of any received datagram.
• Christian Bartolomäus continued his quest to de-bitrot the JVM backend.
• Jan-Olof Hendig chased up quite a few old tickets, closing 7 and marking 22 as fixed, needing to have tests made to prevent regressions.
• Elizabeth Mattijsen optimized the (elem) set operator for the 42 (elem) ^100 case, so that it no longer depends on the size of the Range.
• And many other smaller fixes and improvements.

## Blog Posts

## Meanwhile on Twitter

## Meanwhile on StackOverflow

## Meanwhile on perl6-users

## Perl 6 in comments

## Winding Down

More and more development in Rakudo Perl 6 is not happening in the core anymore. This is a good thing! Yours truly will try to expand the scope of reporting in the Perl 6 Weekly in the future. So please check in again next week for more and broader Perl 6 related news!

## Weekly changes in and around Perl 6: 2018.10 Pragmatic Perl

### Published by liztormato on 2018-03-05T20:35:53

Viacheslav Tykhanovskyi made all of the interviews he has done for the (Russian language) Pragmatic Perl website from 2013 to 2015 available in English as a single PDF for easy offline reading (Reddit comments). Although the interviews are at least 2 years old, they still feel very up-to-date.
Of the 17 interviewees, these 10 had something to say about Perl 6: Sawyer X, Stevan Little, chromatic, Marc Lehmann, Tokuhiro Matsuno, Randal Schwartz, Ricardo Signes, Renée Bäcker, David Golden and Philippe Bruhat. A very interesting (and, at 120+ pages, maybe long) read! Kudos to Viacheslav Tykhanovskyi!

## Rakudo Perl 6 in Alpine

Rakudo Perl 6 is now part of the Alpine Linux distribution in edge/testing. Another step towards easy availability of Rakudo Perl 6 in the Linux world!

## Performance Analysis Tooling

Timo Paulssen was finally able to start on his Rakudo Perl 6 Performance Analysis Tooling Grant. So now running your asynchronous code with --profile will produce some real results! He describes the progress in a blog post titled Delays and Delights.

## Curating And Improving Perl 6 documentation

The TPF Grants Committee has approved JJ Merelo’s grant proposal for improving the Perl 6 documentation. Can’t wait to see the improvements to the documentation that it will bring us!

## Polishing Rationals

Zoffix Znet created a proposal to make Rationals work better in Rakudo Perl 6. Apart from making Rationals more consistent, he also expects to see some performance gains! And to make this all happen sooner rather than later, he drafted a Grant Proposal for the next round of TPF grants.

## Other Core Developments

• Ticket status of past week.
• Jonathan Worthington changed the extension of the setting files from .pm to .pm6 to follow the advice of the documentation.
• TimToady decided that say() will not autothread, after a long discussion on whether it should or not.
• Zoffix Znet made Num.Bool about 9x faster. He also fixed a scoping issue with Blocks in regexes, and fixed .grep on HyperSeq/RaceSeq. But that’s not all: he also fixed an issue with NativeCall and precompiled modules.
• Christian Bartolomäus again fixed various old and new issues specific to the JVM backend.
• Elizabeth Mattijsen changed substr() to be a frontend to Str.substr, instead of vice-versa. She also made substr() up to 1.5x and Str.substr up to 3x faster. She did the same with substr-rw, which only got up to 20%/30% faster. She also made sure that the Unicode aliases of several operators (≤, ≠, ≥ and −) are now just as fast as their ASCII counterparts.
• And many other smaller fixes and improvements.

## Blog Posts

## Meanwhile on StackOverflow

## Meanwhile on Twitter

## Meanwhile on perl6-users

## Meanwhile on PerlMonks

## Perl 6 in comments

## Winding Down

The weather has turned from record-breaking cold for the time of the year to a nice spring, in the matter of a day! It feels to me like we’re going to see some exciting budding buds in the coming weeks, if the weather is any indication. So please check in again next week for more budding!

## my Timotimo \this: Delays and Delights

### Published by Timo Paulssen on 2018-03-04T23:24:34

Hi, my name is timotimo and I'm a Perl 6 developer. I've set up this blog to write reports on my TPF Grant.

Before the actual report starts, I'd like to issue an apology. In between my grant application and the grant being accepted, I developed a bit of RSI that lasted for multiple months. I had already anticipated that near the end of January a move would occupy me a bit, but I had no idea how stressful it would turn out to be, and how long afterwards it would keep me occupied. I regret that it took me so long to actually get started. However, I've already got little bits to show for the first report of this grant!

# Delight №1: Non-crashy multi-threaded profiles

Until now, if you added a signal handler for Ctrl-C to your program, or used run, shell, or even Proc::Async or any async IO, rakudo or moarvm will have started an additional thread for you. Even if you didn't necessarily realize it, this caused the profiler to either violently explode, or return useless or partial results.
Just these past few days I've made a bit of reliability work for the profiler available to everyone running rakudo from the latest source code. Now the profiler will only sometimes abort with the error message "profiler lost sequence" – a bug that I'm also going to chase down as part of the grant. On top of that, the HTML profiler frontend now shows timing and allocations from every thread.

# Delight №2: Faster Heap Snapshots

N.B.: I have done the work discussed in this section before the grant actually started, but I think it's still interesting for you.

As you can tell from the – sadly mildly messed-up – grant application, I also work on a second profiler: the Heap Snapshot Profiler. It takes a snapshot of the structure of all objects in memory, and their connections. With this information, you can figure out what kinds of objects account for how much of your memory usage, and for what reason any given object is not being removed by the garbage collector.

If you've already tried out this profiler in the past, you may have noticed that it incurs a significant memory usage increase during run-time. After that, it takes a surprising amount of time to load in the heap snapshot analyzer. This changed noticeably when I implemented a new format for heap snapshots.

Until then, the format had one line (i.e. \n delimited) per snapshot (i.e. for each GC run) and a JSON-encoded list of strings up front. The snapshots themselves were then a few lines with lists of integers. This caused two little issues in terms of performance:

• Encoding the list of strings as JSON had to be done in NQP code, because of the somewhat complex escaping rules.[1]
• Reading the heap snapshot requires a whole lot of string splitting and integer parsing, which is comparatively slow.
The binary-based format I replaced it with addresses both of these issues, and has a few extra features for speed purposes:

• The list of strings is written as a 64-bit integer for the string length followed by the string itself encoded in UTF-8. At the very front is a single 64-bit integer that holds the list's size.
• Snapshot data is encoded with either 64-bit integers or a very simple variable-sized integer scheme. Each list stores its size at the beginning, too.
• The very end of the file contains an index with the sizes of all the sections, so that skipping to any point of interest is cheap.

Armed with the index at the end of the file, the binary format can be read by multiple threads at the same time, and that's exactly what the new Heap Snapshot Analyzer will do. In addition to the start positions of each section, there's also a pointer right into the middle of the one section that holds variable-sized entries. That way both halves can be read at the same time and cheaply combined to form the full result.

The "summary all" command loads every snapshot, measures the total size of recorded objects, how many objects there are, and how many connections exist. The result is displayed as a table. Running this against a 1.1 gigabyte big example snapshot with 44 snapshots takes about a minute, uses up 3⅓ CPU cores out of the 4 my machine has, and maxes out at 3 gigs of RAM used.[2] That's about 19 megabytes per second. It's not blazing fast, but it's decent.

# Delight №3: Web Frontend

Sadly, there isn't anything to show for this yet, but I've started a Cro application that lets the user load up a heap snapshot file and load individual snapshots in it. It doesn't seem like much at all, but the foundation on which the other bits will rest is laid now. I intend for the next posts to have a bunch of tiny animated gif screencasts for you to enjoy!
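The variable-sized integer scheme mentioned for the snapshot data above isn't spelled out in the post. A common approach in such binary formats (the exact MoarVM encoding may well differ) is a LEB128-style varint, sketched here in Python:

```python
def encode_varint(n):
    """LEB128-style unsigned varint: 7 payload bits per byte,
    high bit set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data, pos=0):
    """Return (value, next_position) decoded from data at pos."""
    result = shift = 0
    while True:
        byte = data[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return result, pos

# Small values cost one byte instead of a fixed eight:
for n in (0, 127, 128, 300, 2**32):
    enc = encode_varint(n)
    assert decode_varint(enc)[0] == n
    print(n, len(enc))
```

The win is exactly the one the post describes: most counts and offsets in a snapshot are small, so they shrink to one or two bytes, while the section index still allows cheap seeking.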
# Delight №4: Faster Profile File Writing

The SQL output will likely be the primary data storage format after my grant is done. Therefore I've taken first steps to optimize outputting the data. Right now it already writes out data about 30% faster during the actual writing stage. The patch needs a bit of polish, but then everyone can enjoy it!

# Next steps

The next thing I'd like to work towards is trying out the profiler with a diverse set of workloads: from hyper/race to a Cro application. That will give me a good idea of any pain points you'd encounter. For example, once the ThreadPoolScheduler's supervisor thread runs, it'll spend a majority of its time sleeping in between ticks. This shows up very prominently in the Routines list, to name one example.

Of course, I take inspiration from my fellow Perl 6 developers and users on the IRC channel and the mailing lists, who offer a steady supply of challenges :)

If you have questions, suggestions, or comments, please feel free to ping me on IRC, or post comments to this blog using the disqus I've embedded below.

Thanks for reading; see you in the next one!
  - Timo

1. It could have been done in C code, but it feels weird to have JSON string escaping code built into MoarVM. ↩︎
2. Figuring out why the RAM usage is so high is also on my list. Amusingly, the heap analyzer can help me improve the heap analyzer! ↩︎

## Perl 6 Inside Out: 🔬70. Examining the enum type in Perl 6

### Published by andrewshitov on 2018-03-03T23:00:22

In Perl 6, you can create enumerations like this:

enum colour <red orange yellow green blue violet>;

Having said this, you can use the new name as a type name and create variables of that type:

my colour $c;

$c = green;
say $c;       # green
say $c.Int;   # 3

As you would rightly expect, the type of the variable is very predictable:

say $c.^name; # colour
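For comparison, the standard Python analogue of this enumeration, using the enum module's functional API (start=0 matches Perl 6's numbering from zero):

```python
from enum import IntEnum

# Perl 6's `enum colour <red orange ...>` numbers its members from 0;
# IntEnum's functional API can do the same via start=0.
colour = IntEnum(
    "colour",
    ["red", "orange", "yellow", "green", "blue", "violet"],
    start=0,
)

c = colour.green
print(c.name)            # green
print(int(c))            # 3
print(type(c).__name__)  # colour
```

The parallel is close: in both languages the enum introduces a new type whose members carry both a name and an ordinal value.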

Now, try to find the class implementation in Rakudo sources. Surprisingly, there is no file src/core/Enum.pm, but instead, there is src/core/Enumeration.pm. Looking at that file, you cannot say how our program works. Let us dig a bit.

In Grammar (src/Perl6/Grammar.nqp), you can find the following piece:

proto token type_declarator { <...> }

token type_declarator:sym<enum> {
. . .
}

So, the enum is not a name of the data type but a predefined keyword, one of a few that exist for type declarations (together with subset and constant).

The token starts with consuming the keyword and making some preparations, which are not very interesting for us at the moment:

<sym><.kok>
:my $*IN_DECL := 'enum';
:my $*DOC := $*DECLARATOR_DOCS;
{ $*DECLARATOR_DOCS := '' }
:my $*POD_BLOCK;
:my $*DECLARAND;
{
    my $line_no := HLL::Compiler.lineof(self.orig(), self.from(), :cache(1));
    if $*PRECEDING_DECL_LINE < $line_no {
        $*PRECEDING_DECL_LINE := $line_no;
        $*PRECEDING_DECL := Mu; # actual declarand comes later, in Actions::type_declarator:sym<enum>
    }
}
<.attach_leading_docs>

Then, we expect either a name of the new type or a variable or nothing(?):

[
| <longname>
{
. . .
}
| <variable>
| <?>
]

The variable part is not yet implemented:

> enum $x <a b c>
===SORRY!=== Error while compiling:
Variable case of enums not yet implemented. Sorry.
at line 2

Our test program falls to the first branch:

<longname>
{
    my $longname := $*W.dissect_longname($<longname>);
    my @name := $longname.type_name_parts('enum name', :decl(1));
    if $*W.already_declared($*SCOPE, self.package, $*W.cur_lexpad(), @name) {
        $*W.throw($/, ['X', 'Redeclaration'],
            symbol => $longname.name(),
        );
    }
}

For example, if you declare enum colour, then $longname.name() returns colour. Thus, we extracted it. (Also notice how redeclaration is handled.)

Finally, here is the rest of the token body:

{ $*IN_DECL := ''; }
<.ws>
<trait>*
:my %*MYSTERY;
[
| <?[<(«]> <term> <.ws>
|| <.panic: 'An enum must supply an expression using <>, «», or ()'>
]
<.explain_mystery> <.cry_sorrows>

Indeed, we need to explain the mystery here. So, there’s room for optional traits, fine:

<trait>*

There’s another construct that should match to avoid panic messages:

<?[<(«]> <term> <.ws>

Don’t be confused by the different number of opening and closing angle brackets here. The first part is a lookahead assertion with a character class:

<? [<(«] >

It checks whether one of the <, (, or « opening brackets is at this position. The panic message is displayed if none of them is found there. Our next expected guest is a term. Obviously, the whole part <red orange . . . violet> matches it. Not that bad; what we need to do now is to understand what happens next.

## Perl 6 Inside Out: 🦋 69. Setting timeouts in Perl 6

### Published by andrewshitov on 2018-03-03T11:00:44

In Perl 5, I used to set timeouts using signals (or, at least, that was an easy and predictable way). In Perl 6, you can use promises. Let us see how to do that. To imitate a long-running task, create an infinite loop that prints its state now and then. Here it is:

for 1 .. * {
    .say if $_ %% 100_000;
}

As soon as the loop gets control, it will never quit. Our task is to stop the program in a couple of seconds, so the timer should be set before the loop:

Promise.in(2).then({
exit;
});

for 1 .. * {
    .say if $_ %% 100_000;
}

Here, the Promise.in method creates a promise that is automatically kept after the given number of seconds. On top of that promise, using then, we add another promise, whose code will be run after the timeout. The only statement in its body is exit, which stops the main program. Run the program to see how it works:

$ time perl6 timeout.pl
100000
200000
300000
. . .
3700000
3800000
3900000

real 0m2.196s
user 0m2.120s
sys 0m0.068s

The program counts up to about four million on my computer and quits in two seconds. That is exactly the behaviour we needed.
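For readers who think in other languages: the same "schedule a callback instead of installing a signal handler" pattern can be sketched in Python, with a timer standing in for Promise.in and an Event flag standing in for exit (the names below are mine, not a translation of any Rakudo API):

```python
import itertools
import threading

def run_with_timeout(seconds=0.2):
    """Spin in a 'long-running task' until a timer fires."""
    stop = threading.Event()
    # Analogue of Promise.in(2).then({ exit }): run a callback later.
    threading.Timer(seconds, stop.set).start()
    count = 0
    for count in itertools.count(1):
        if stop.is_set():
            break
    return count

print(run_with_timeout() > 0)  # True — the loop ran, then was stopped
```

The shape is the same as the Perl 6 version: the main loop stays oblivious, and a separately scheduled piece of code ends the work after the deadline.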

For comparison, here is the program in Perl 5:

use v5.10;

alarm 2;
$SIG{ALRM} = sub { exit; };

for (my $c = 1; ; $c++) {
    say $c unless $c % 1_000_000;
}

(It manages to count up to 40 million, but that’s another story.)

## Perl 6 Inside Out: 🔬68. The smartness of the sequence operator in Perl 6, part 1

### Published by andrewshitov on 2018-03-02T23:00:42

In Perl 6, you can ask the sequence operator to build a desired sequence for you. It can be an arithmetic or geometric progression. All you need is to show the beginning of the sequence to Perl, for example:

.say for 3, 5 ... 11;

This prints the numbers 3, 5, 7, 9, and 11. Or:

.say for 2, 4, 8 ... 64;

This code prints powers of 2 from 2 to 64: 2, 4, 8, 16, 32, and 64. I am going to try to understand how that works in Rakudo. First of all, look into the src/core/operators.pm file, which keeps a lot of different operators, including a few versions of the ... operator. The one we need looks really simple:

multi sub infix:<...>(\a, Mu \b) {
    Seq.new(SEQUENCE(a, b).iterator)
}

Now, the main work is done inside the SEQUENCE sub. Before we dive there, it is important to understand what its arguments a and b receive. In the case of, say, 3, 5 ... 11, the first argument is a list 3, 5, and the second argument is a single value 11. These values land in the parameters of the routine:

sub SEQUENCE(\left, Mu \right, :$exclude_end) {
    . . .
}

What happens next is not that easy to grasp. Here is a screenshot of the complete function:

It contains about 350 lines of code and includes a couple of functions. Nevertheless, let’s try.

What you see first, is creating iterators for both left and right operands:

my \righti := (nqp::iscont(right) ?? right !! [right]).iterator;
my \lefti := left.iterator;

Then, the code loops over the left operand and builds an array @tail out of its data:

while !((my \value := lefti.pull-one) =:= IterationEnd) {
    $looped = True;
    if nqp::istype(value,Code) { $code = value; last }
    if $end_code_arity != 0 {
        @end_tail.push(value);
        if +@end_tail >= $end_code_arity {
            @end_tail.shift xx (@end_tail.elems - $end_code_arity)
                unless $end_code_arity ~~ -Inf;

            if $endpoint(|@end_tail) {
                $stop = 1;
                @tail.push(value) unless $exclude_end;
                last;
            }
        }
    }
    elsif value ~~ $endpoint {
        $stop = 1;
        @tail.push(value) unless $exclude_end;
        last;
    }
    @tail.push(value);
}

I leave reading and understanding this piece of code to you as an exercise, but for the given example, the @tail array will just contain two values: 3 and 5.

> .say for 3,5...11;
multi sub infix:<...>(\a, Mu \b)
List    # nqp::say(a.^name);
~~3     # nqp::say('~~' ~ value);
~~5     # nqp::say('~~' ~ value);
elems=2 # nqp::say('elems='~@tail.elems);
0=3     # nqp::say('0='~@tail[0]);
1=5     # nqp::say('1='~@tail[1]);

This output shows some debug data print outs that I added to the source code to see how it works. The green comments show the corresponding print instructions.

That’s it for today. See you tomorrow with more stuff from the sequence operator. Tomorrow, we have to understand how the list 3, 5 tells Perl 6 to generate increasing values with step 1.
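To get a feel for what SEQUENCE must deduce from the collected @tail, here is a rough Python sketch — not Rakudo's actual algorithm — that infers a geometric ratio from three leading elements, or an arithmetic step from two, and then generates values up to the endpoint:

```python
def sequence(leading, endpoint):
    """Mimic '3, 5 ... 11' / '2, 4, 8 ... 64' style deduction (a sketch).
    Assumes integer steps/ratios and a reachable endpoint."""
    out = list(leading)
    if len(leading) >= 3 and leading[1] ** 2 == leading[0] * leading[2]:
        # Three elements with a constant ratio: geometric progression.
        step = lambda x: x * (leading[1] // leading[0])
    elif len(leading) >= 2:
        # Two elements: arithmetic progression with a constant difference.
        step = lambda x: x + (leading[1] - leading[0])
    else:
        step = lambda x: x + 1  # a single element: default step of 1
    while out[-1] != endpoint:
        out.append(step(out[-1]))
    return out

print(sequence([3, 5], 11))     # [3, 5, 7, 9, 11]
print(sequence([2, 4, 8], 64))  # [2, 4, 8, 16, 32, 64]
```

Rakudo's real SEQUENCE handles far more cases (code endpoints, Whatever, non-numeric values), but the core deduction — inspect the tail, pick a generator — is the same idea.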

## Perl 6 Inside Out: 🔬67. Redeclaration of a symbol in Perl 6

Today, we will see how Perl 6 helps us to keep our programs in good shape.

## Redeclaration of a variable

Examine the following program:

my $x = 1;
my $x = 2;
say $x;

You can immediately see that this program is not entirely correct. Either we meant to assign a new value to $x, or to create a new variable with a different name. In either case, the compiler has no idea and complains:

$ perl6 redecl.pl
Potential difficulties:
    Redeclaration of symbol '$x'
    at /Users/ash/redecl.pl:2
    ------> my $x⏏ = 2;
2

You see a runtime warning, while the program does not stop. Let us find out where it happens in the source code. When you declare a variable, the grammar matches the corresponding text and calls the variable_declarator action method. It is quite compact, but nevertheless I will not quote it completely.

class Perl6::Actions is HLL::Actions does STDActions {
    . . .
    method variable_declarator($/) {
        . . .
    }
    . . .
}

By the way, you can see here how Perl 6 treats a variable name:

my $past := $<variable>.ast;
my $sigil := $<variable><sigil>;
my $twigil := $<variable><twigil>;
my $desigilname := ~$<variable><desigilname>;
my $name := $sigil ~ $twigil ~ $desigilname;

The name of a variable is a concatenation of a sigil, a twigil and an identifier (which is called desigiled name in the code).

Then, if we’ve got a proper variable name, check it against an existing lexpad:

if $<variable><desigilname> {
    my $lex := $*W.cur_lexpad();
    if $lex.symbol($name) {
        $/.typed_worry('X::Redeclaration', symbol => $name);
    }

If the name is known, generate a warning. If everything is fine, create a variable declaration:

make declare_variable($/, $past, ~$sigil, ~$twigil, $desigilname,
    $<trait>, $<semilist>, :@post);

## Redeclaration of a routine

Now, let us try to re-create a subroutine:

sub f() {}
sub f() {}

This may only be OK if the subs are declared as multi-subs. With the given code, the program will not even compile:

===SORRY!=== Error while compiling /Users/ash/redecl.pl
Redeclaration of routine 'f' (did you mean to declare a multi-sub?)
at /Users/ash/redecl.pl:6
------> sub f() {}⏏<EOL>

This time, it happens in a much more complicated method, routine_def:

method routine_def($/) {
    . . .
    my $predeclared := $outer.symbol($name);
    if $predeclared {
        my $Routine := $*W.find_symbol(['Routine'], :setting-only);
        unless nqp::istype($predeclared<value>, $Routine)
            && nqp::getattr_i($predeclared<value>, $Routine, '$!yada') {
            $*W.throw($/, ['X', 'Redeclaration'],
                symbol => ~$<deflongname>.ast,
                what   => 'routine',
            );
        }
    }

## The exception

The code of the exception is rather simple. Here it is:

my class X::Redeclaration does X::Comp {
    has $.symbol;
    has $.postfix = '';
    has $.what = 'symbol';
    method message() {
        "Redeclaration of $.what '$.symbol'"
        ~ (" $.postfix" if $.postfix)
        ~ (" (did you mean to declare a multi-sub?)" if $.what eq 'routine');
    }
}

As you see, depending on the value of $.what, it prints either a short message or adds a suggestion to use the multi keyword.
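The mechanics above boil down to a symbol-table lookup before each declaration: a variable redeclaration is only a "worry" (warn and continue), while a routine redeclaration is thrown at compile time. A toy Python analogue of that distinction (class and method names are mine, not Rakudo's):

```python
import warnings

class Lexpad:
    """Toy scope: redeclared variables warn, redeclared routines throw."""
    def __init__(self):
        self.symbols = {}

    def declare(self, name, what="variable"):
        if name in self.symbols:
            if what == "routine":
                # like $*W.throw(... 'X::Redeclaration', what => 'routine')
                raise NameError(
                    f"Redeclaration of routine '{name}' "
                    "(did you mean to declare a multi-sub?)")
            # like $/.typed_worry('X::Redeclaration', symbol => $name)
            warnings.warn(f"Redeclaration of symbol '{name}'")
        self.symbols[name] = what

pad = Lexpad()
pad.declare("$x")
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    pad.declare("$x")           # warns, but the program continues
print(len(caught))              # 1
try:
    pad.declare("f", what="routine")
    pad.declare("f", what="routine")
except NameError as e:
    print("caught:", e)
```

The asymmetry mirrors Rakudo's: shadowing a variable is merely suspicious, but two non-multi routines with one name can never both be called, so that case is fatal.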

## Perl 6 Inside Out: 🦋 66. Atomic operations in Perl 6

N. B. The examples below require a fresh Rakudo compiler, at least version 2017.09.

Discussing parallel computing sooner or later leads to the problem of race conditions. Let us look at a simple counter that is incremented by parallel threads:

my $c = 0;

await do for 1..10 {
    start {
        $c++ for 1 .. 1_000_000
    }
}

say $c;

If you run the program a few times, you will immediately see that the results are very different:

$ perl6 atomic-1.pl
3141187
$ perl6 atomic-1.pl
3211980
$ perl6 atomic-1.pl
3174944
$ perl6 atomic-1.pl
3271573

Of course, the idea was to increase the counter by 1 million in each of the ten threads, but about ⅓ of the steps were lost. It is quite easy to understand why that happens: the parallel threads read the variable and write to it ignoring the presence of the other threads, not taking into account that the value can change in between. Thus, some of the threads work with an outdated value of the counter.

Perl 6 offers a solution: atomic operations. The syntax of the language is equipped with the Atom Symbol (U+269B) character ⚛ (no idea why it is displayed in that purple colour). Instead of $c++, you should type $c⚛++.

my atomicint $c = 0;

await do for 1..10 {
    start {
        $c⚛++ for 1 .. 1_000_000
    }
}

say $c;

And before thinking of the necessity to use a Unicode character, let us look at the result of the updated program:

$ perl6 atomic-2.pl
10000000

This is exactly the result we wanted! Notice also that the variable is declared as a variable of the atomicint type. That is a synonym for int, which is a native integer (unlike Int, which is a data type represented by a Perl 6 class).

It is not possible to ask a regular value to be atomic. That attempt will be rejected by the compiler:

$ perl6 -e'my $c; $c⚛++'
Expected a modifiable native int argument for '$target'
  in block <unit> at -e line 1

A few other operators can be atomic, for example, the prefix and postfix increments and decrements ++ and --, or += and -=. There are also atomic versions of the assignment operator ⚛= and the one for reading: ⚛ (sic!). If you need atomic operations in your code, you are not forced to use the ⚛ character. There exists a bunch of alternative functions that you can use instead of the operators:

my atomicint $c = 1;

my $x = ⚛$c;   $x = atomic-fetch($c);
$c ⚛= $x;      atomic-assign($c, $x);
$c⚛++;         atomic-fetch-inc($c);
$c⚛--;         atomic-fetch-dec($c);
++⚛$c;         atomic-inc-fetch($c);
--⚛$c;         atomic-dec-fetch($c);
$c ⚛+= $x;     atomic-fetch-add($c, $x);

say $x; # 1
say $c; # 3
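The same lost-update problem and its fix exist in any language with shared-memory threads. Here is a Python sketch where a Lock plays the role of the ⚛ operations (Perl 6's atomicint maps to hardware atomic instructions; a lock is merely the simplest way to get the same correctness in a sketch):

```python
import threading

def run(n_threads=10, n_incr=100_000):
    """Increment a shared counter from several threads, safely."""
    c = 0
    lock = threading.Lock()

    def worker():
        nonlocal c
        for _ in range(n_incr):
            with lock:      # make the read-modify-write indivisible
                c += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return c

print(run())  # 1000000 — no updates are lost
```

Without the lock, the `c += 1` is a separate read, add, and write, and interleaved threads can overwrite each other's updates, exactly as in the atomic-1.pl runs above.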

## brrt to the future: Some things I want

Lately I've been fixing a few bugs here and there in the JIT compiler, as well as trying to advance the JIT expression optimizer. The story of those bugs is interesting but in this post I want to focus on something else, namely some support infrastructure that I think we should have that would make working on MoarVM and spesh/jit in particular much nicer.

There are a bunch of things related to runtime control of spesh and the JIT:
• jit-bisect.pl is an awesome tool (if I say so myself) that deserves its own blog post. It can figure out which frame is the first to be miscompiled and cause an execution failure. (Or to be specific, the least miscompile to cause a failure.) When a JIT issue is brought to me, I reach for it directly. It should be more powerful:
• Because it does not know in advance how many frames are compiled in a program, it uses an 'exponential probing' strategy to find the first breaking block, and then binary search between the last two probes. This means that we need to have log(n) tries to find the first failure during probing, and then another log(n) tries to find the exact frame. Usually, failures are much faster than successful runs, so the first phase takes a disproportionate amount of time. It would probably speed up bisects if we could start probing from a number higher than 1.
• It would be nice if as part of (optional) output, jit-bisect.pl could create a 'replicator' script that would replicate the failure exactly as the bisect script found it. E.g. if it finds that frame 237 block 6 fails, then it should output a script that executes the program with the expression JIT active to exactly that point.
• We assign sequence number 0 to the first frame to be JIT-compiled, but can't bisect that because we start with sequence number 1.
• Both spesh and JIT logging should be much more specific. Currently, when MVM_SPESH_LOG is defined, we log all information on all frames we encounter onto the log file, resulting in enormous (easily >20MB) text files. Of those text files, only tiny bits tend to be affected in a way that interests you, and especially as the result of bisect process, only the last frame will be of any interest. So I propose a MVM_SPESH_LOG_FOI (frame of interest) flag that toggles when a frame should be logged or not. Could be an array as well (comma- or semicolon-separated-values). Same goes for the JIT.
• We can limit the number of frames that passes through spesh using the flag MVM_SPESH_LIMIT, and the number of frames (and basic blocks) that pass through the expression JIT with MVM_JIT_EXPR_LAST_FRAME and MVM_JIT_EXPR_LAST_BB. We cannot do the same for the 'lego' JIT, and we probably should.
• When MVM_SPESH_BLOCKING is not defined, spesh runs concurrently with the main program, the result of MVM_SPESH_LOG is not repeatable, and as a result MVM_SPESH_LIMIT has no well-defined meaning. Although the jit-bisect.pl program will set MVM_SPESH_LIMIT, it is easy enough to forget. Maybe we should either disable MVM_SPESH_LIMIT without MVM_SPESH_BLOCKING, or MVM_SPESH_LIMIT should imply MVM_SPESH_BLOCKING.
• Generally we have too much ad-hoc environment setup code that - I feel - does not fit well into the idea of MoarVM being embedded within another program. Not that embedding interpreters in other programs is a common thing to do anymore, but still. We should probably take that out of general instance setup and install it somewhere else. (MVM_spesh_setup_control_flags?)
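The 'exponential probing' plus binary search described in the first bullet above can be sketched in a few lines of Perl 6 (a rough illustration, not jit-bisect.pl's actual code; &run-fails is a hypothetical helper that runs the program limited to $n frames and reports whether it breaks):

```perl6
# Phase 1: probe 1, 2, 4, 8, ... until the first failing limit.
# Phase 2: binary search between the last passing and failing probes.
sub find-first-failing-frame(&run-fails) {
    my $hi = 1;
    $hi *= 2 until run-fails($hi);

    my $lo = $hi div 2;
    while $lo + 1 < $hi {
        my $mid = ($lo + $hi) div 2;
        if run-fails($mid) { $hi = $mid } else { $lo = $mid }
    }
    $hi   # the first frame count at which the program fails
}
```

Starting phase 1 from a number higher than 1, as suggested above, would just mean initializing $hi differently.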
Then there's more ambitious stuff, that still falls under 'housekeeping':
• Currently the expression tree is backed by an array of integers, some of which identify the nodes and some of which are internal pointers (relative to the base of the array). There is an additional info array of structures that has to be explicitly maintained and that holds some information (result type and size, mostly). This is rather annoying when operating on the tree structure (as the optimizer does) because both arrays have to be kept synchronized. Also, these info values are only defined for nodes of the tree. We can reasonably fit all information stored in an info node in 32 bits (if we're packing) or 64 bits (if we're sloppy), even together with the node value. Doing this should save memory, be better for caches, and make tree manipulation easier.
• I want to have explicit cleanup of the spesh worker thread (if requested by program arguments). Not having this makes ASAN and valgrind somewhat less useful, because they will falsely report in-progress memory as being leaked. To do this we'd need to wait, during shutdown, for the worker thread to be stopped, which means that we also need to have a signal to indicate that it has been stopped.
• Sometimes, the JIT can get into a confused state and panic, bringing down the process. This is great for debugging and awful for users. I feel like we might be able to improve this - we can, during JIT graph construction, bail out without crashing (if we find constructs we can't JIT yet). We may be able to extend and generalize this. (If we find that we're in an illegal state entirely, that should probably still crash).
• The legacy ('lego') JIT currently works with a two-phase process, first building a JIT graph, then compiling it. This is redundant - without loss of functionality, the lego JIT could be reduced to a single phase (reducing the number of internal structures quite dramatically). I'm not totally sure whether we should do this, though, because we may eventually want an explicit preparation phase (e.g. when we start doing multiple basic blocks per expression tree).
And there's more ambitious stuff that would fall under optimizations and general functionality improvements:
• MoarVM in some places assumes that we have 'easy' access to the current instruction pointer, e.g. to look up dynamic variables or the scope of the current CATCH block. This is logical for an interpreter but problematic for the JIT. We currently need to maintain a pointer to the last entered basic block to approximate it, i.e. a memory store per block. This is expensive, and it's not often needed:
• We already have to maintain a 'return pointer' for JIT frames that are high up the call stack, so in those cases the additional stores are redundant.
• In case we are interested in the current position of the current frame, the current position is already on the stack (as the return address of the function call), and we can read it from stack-walking.
• Alternatively, we can replace calls to functions that are interested in the current position to calls that receive that position explicitly. However, I kind of prefer using stack walking since we'll need the return pointer anyway. Also, it doesn't work for anything that might implicitly throw (MVM_exception_throw_adhoc)
• The register allocator can be much better in a bunch of (simple) ways:
• We never need to spill a constant value to memory, since we can always just insert a copy instead.
• It is probably not necessary to insert individual restores before usages in the same basic block, especially for usages prior to the spilling point.
• We sometimes spill values that we didn't really need to spill, since we're already storing them to memory as part of the basic block.
• We have a kind-of hacky way to ensure that our 3-instruction code (c = op(a,b)) is converted to x86 2-instruction code (b = op(b,a)). This should be something that the register allocator can solve, or a phase after the register allocator, but it is currently being handled during code generation.
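The conversion mentioned in the last bullet is the classic three-address to two-address rewrite; illustratively (pseudo-assembly of my own, not MoarVM's actual emitted code):

```
; three-address form the JIT works with:  c = add(a, b)
; x86 instructions overwrite one source operand, so the
; destination must first receive a copy of the first operand:
mov c, a    ; c = a
add c, b    ; c = c + b
```

Solving this in (or just after) the register allocator, e.g. by assigning c and a to the same register where their live ranges allow it, could often make the extra mov disappear entirely.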
There is definitely more out there I want to do, but this should be enough to keep me busy for a while. And if anyone else wants to take a try at any of these, they'd be very welcome to :-).

## Weekly changes in and around Perl 6: 2018.09 Say Cheese.d

Zoffix Znet has given an excellent insight into how Perl 6 developers introduce changes to the compiler and the language in his blog post On Specs, Versioning, Changes, and… Breakage (Reddit comments). It makes clear that over time, the story of Perl 6 will have fewer and fewer plot holes. A recommended read!

## Are you near Brno this Thursday?

Then that’s your chance to see Jonathan Worthington give a presentation on Cro and Perl 6’s concurrency features. Wish I could be there!

## Rakudo Compiler Release 2018.02

The past week saw not one, but two Rakudo Perl 6 compiler releases. Aleks-Daniel Jakimenko-Aleksejev did all the hard work to do a 2018.02 as well as a 2018.02.1 release. The latter contains a hot fix for a boo-boo. Fortunately, all Linux packages have already been updated by Claudio Ramirez.

## Other Core Developments

As promised last week, an overview of developments of the past 2 weeks:

• Ticket status of past week and the week before that.
• Jonathan Worthington fixed a memory leak with one-shot timers, such as Promise.in(2). He also implemented a nqp::tryfindmethod op as an optimization to nqp::can and nqp::findmethod.
• Bart Wiegmans fixed an (apparently long-standing) problem with nqp::if in spesh: “It is amazing we got away with this hack for so long”.
• Samantha McVey worked on the strict decoding of the windows-1251 and windows-1252 encodings.
• Nick Logan added a nqp::getppid op to get the pid of the parent process.
• Timo Paulssen did some more preparations for the work on the multi-threaded profiler.
• Zoffix Znet fixed various issues with eof detection on zero-sized files on MoarVM. He also made the gist of a Backtrace more informative, and made the return value of Str.subst-mutate(:g) consistent regardless of whether the underlying match succeeded. On the efficiency front, he did some amazing performance improvements on Uni.list (15x to 653x). He also implemented IO::CatHandle.handles, which gives you a Seq of the IO::Handles of which it consists. And he fixed an issue with the last statement of a for loop not being sunk.
• Tom Browder added a lot of tests and documentation to nqp. He also renamed the spew sub in nqp to spurt to align with how that functionality is called in Rakudo Perl 6. And he added a run-command sub to nqp.
• Christian Bartolomäus fixed various old and new issues specific to the JVM backend.
• Elizabeth Mattijsen made sure that Cool.subst-mutate will not actually convert the object to a Str if the underlying match failed. And she started working on converting all public only subs to multi subs, allowing candidates to be added by developers in their programs without losing the built-in ones.
• And many, many other smaller error fixes and improvements.

## Meanwhile on StackOverflow

• Andrew Shitov:

I wrote a “Blinking LED Hello World” in Perl 6 for Orange Pi today.

for True, !* ... * {
    shell("gpio write $pin " ~ +$_);
    sleep 0.5;
}

It is cool that you can use such a sequence True, !* ... *
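The sequence is an infinite flip-flop: each element is the negation (!*) of the previous one. A quick way to see it (my own sketch, taking just the first few elements of the infinite sequence):

```perl6
# Prints True, False, True, False - each line negates the previous
.say for (True, !* ... *)[^4];
```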
*snip*
It would be nice to have fresh packages for major OSes beyond Windows and OS X listed on the official site.

## Winding Down

Plenty of excitement in the world of Perl 6 again this week. And yours truly feeling a bit better yet again. Looking forward to being able to do next week’s Perl 6 Weekly. Hope you are too. See you then!

## Zoffix Znet: Perl 6: On Specs, Versioning, Changes, and… Breakage

### Published on 2018-02-20T00:00:00

Details on how the Perl 6 language changes.

## Weekly changes in and around Perl 6: 2018.08 Perl 6’ya Giriş

Yalın Pala has created a Turkish translation of Naoum Hankache‘s Perl 6 Introduction, which gives you a quick overview of the Perl 6 programming language, enough to get you up and running. This now brings the total of translations to 10: Bulgarian, Chinese, Dutch, French, German, Italian, Japanese, Portuguese, Spanish and now Turkish. A fine body of work!

## German Perl Workshop

The German Perl Workshop 2018 will be happening on 4-6 April, at the Campus Gummersbach of the Technische Hochschule Köln, followed by a Perl 6 Hackathon on the 7th of April. Although not yet officially on the program, there will be a full day Perl 6 workshop given as well. And the Call for Papers has been extended to the 10th of March! So this is the moment to propose your Perl 6 related presentations! Please check it out!

We have had a fine day in front of the Alhambra loving Perl. Several talks on Perl and Perl 6, including great introductions, history, Perl for Windows and programming IRC bots, concurrency in Perl 6, all in all, a great experience. It’s been the second conference we’ve had in Granada, won’t be the last one. Here’s the album I have created, you can also check the #love4perl hashtag in social networks.

## Curating and Improving Perl 6 Documentation

JJ Merelo has submitted a grant proposal for the improvement of the Perl 6 documentation. Please check out the discussion about the pro and cons of this proposal.

## More Perl 6 Performance and Reliability

The grant that Jonathan Worthington requested has been approved by the Board of Directors of the Perl Foundation. So expect some really cool things to happen in the coming weeks/months on top of the excellent work that so many others are already contributing to Rakudo Perl 6.

## UDP Datagram API, an RFC

Timo Paulssen would like to see comments on his proposal for an API providing source address and port of UDP datagrams. If you’d like to say anything about that, now is the time!

## Meanwhile on perl6-users

• Open files by given name and extension and ask for deletion by mimosinnet.
• Swiss Perl Workshop 2018 Call for Perl 6 Keynotes/Talks/Tutorials by Lee Johnson.

## Winding Down

Unfortunately, yours truly is still recovering from a bad cold. Enough to have a woolly head that is incapable of handling complex information. So no core developments section from me this week, will cover this week’s core developments next week. See you then!

## Weekly changes in and around Perl 6: 2018.07 A Quick One from Apopka

While recovering from the long-planned PR&R, yours truly got a bad cold. I guess heat, alcohol and air-conditioning don’t mix too well. So a short Perl 6 Weekly this time, from the town of Apopka, although it feels a bit like being in Pawnee.

## New Bots

Aleks-Daniel Jakimenko-Aleksejev created two new IRC bots: Notable (for noting things, which is helping yours truly writing this already) and Shareable (for making builds of the Whateverable bot publicly available). By the way, the Whateverable repo saw its 500th commit!

## Better documentation

JJ Merelo has worked hard on the doc repository which, by the way, has now surpassed the 8000 commits mark! On Facebook, he said:

We are past the mark of the 800 issues closed in the perl6/doc repository. There’s still a lot of work to do, with 290 outstanding issues. Full disclosure here: I have applied for a Perl6 core grant to deal with this documentation.

## Cro Release 0.7.3

Cro released version 0.7.3, with the most notable changes being:

• Support for HTTP/2.0 push promises (server and client side)
• HTTP session support
• body parser/serialization support in WebSockets
• a UI for manipulating inter-service links in cro web

It’s exciting to see these new developments making Cro the place to go to for implementing all sorts of web services.

## Blog Posts

• push-all optimisation of List.roll by Andrew Shitov.
• How does 0 but True work by Andrew Shitov.
• Dumping 0 but True by Andrew Shitov.
• Perl 6 is better CoffeeScript than CoffeeScript by ktown007.
• Colonpair in Perl 6’s Grammar, part 1 by Andrew Shitov.
• Colonpair in Perl 6’s Grammar, part 2 by Andrew Shitov.
• An attempt to understand how [*] works by Andrew Shitov.

## Other Core Developments

• After having done the Perl 6 Weekly last week, Zoffix Znet continued to be very busy: among many other things, he fixed the use of slurpies in if statements (aka if 42,43,44 -> *@a { }), fixed sprintf on type objects, optimized native pre/post increment/decrement, implemented support for .= to initialize sigilless variables, allowed for parameterized constraints when initializing attributes with .=, and generally optimized the dispatch of .=.
• Jeremy Studer removed an extraneous push in code object creation.
• Jan-Olof Hendig spotted some missing deconts in cmp handling.
• Fernando Correa de Oliveira fixed Parameter.usage-name in the case that the name had a twigil.
• Stefan Seifert fixed an issue in multi-threaded pre-compilation of modules.
• And many other smaller changes and improvements.

## Winding Down

A bit shorter than usual, maybe. Please check again next week when yours truly has returned to her regularly scheduled programming. See you then!

## rakudo.org: Announce: Rakudo Star Release 2018.01

On behalf of the Rakudo and Perl 6 development teams, I’m pleased to
announce the January 2018 release of “Rakudo Star”, a useful and
usable production distribution of Rakudo Perl 6. The tarball for this
release is available for download from the Rakudo website.

Binaries for macOS and Windows (64 bit) will shortly be available at
the same location.

This is a post-Christmas (production) release of Rakudo Star and
implements Perl v6.c. It comes with support for the MoarVM backend
(all module tests pass on supported platforms). Currently, Star is on
a quarterly release cycle.

Please note that this release of Rakudo Star is not fully functional
with the JVM backend from the Rakudo compiler. Please use the MoarVM
backend only.

In the Perl 6 world, we make a distinction between the language (“Perl
6”) and specific implementations of the language such as “Rakudo Perl
6”.

This Star release includes release 2018.01 of the Rakudo Perl 6
compiler, version 2018.01 MoarVM, plus various modules, documentation,
and other resources collected from the Perl 6 community.

The Rakudo compiler changes since the last Rakudo Star release of
2017.10 are now listed in “2017.11.md”, “2017.12.md”, and “2018.01.md”
under the “rakudo/docs/announce” directory of the source distribution.

Important Rakudo bug fixes are now listed at [Perl 6 Alerts]:

Deprecation:

+ “panda” has been removed from releases after 2017.10 since it is deprecated in favour of “zef”.
+ LWP::Simple is deprecated and will be removed. Please use “HTTP::UserAgent”.

Notable changes in modules shipped with Rakudo Star:

+ JSON-Class: Alter the way in which the re-exporting of the traits works
+ JSON-Marshal: Fix for associative and positional type objects
+ Pod-To-HTML: Document P6DOC_DEBUG
+ datetime-parse: New. Dependency of http-useragent
+ doc: Too many to list.
+ http-useragent: New. Intended as replacement for LWP::Simple (now deprecated)
+ json_fast: fix off-by-one in treacherous escape detection
+ perl6-lwp-simple: HTML header names with mixed case fix
+ svg: Fix example in README
+ tap-harness6: Make TAP parsing loose by default before rakudo 2017.09 in prove too
+ zef: Warns about missing META6.json. Sort “list” output.

There are some key features of Perl 6 that Rakudo Star does not yet
handle appropriately, although they will appear in upcoming releases.
Some of the not-quite-there features include:

+ some bits of Synopsis 9 and 11

There is an online resource at http://perl6.org/compilers/features
that lists the known implemented and missing features of Rakudo’s
backends and other Perl 6 implementations.

In many places we’ve tried to make Rakudo smart enough to inform the
programmer that a given feature isn’t implemented, but there are many
that we’ve missed. Bug reports about missing and broken features are
welcomed at rakudobug@perl.org.

There is much more information available about Perl
6, including documentation, example code, tutorials, presentations,
reference materials, design documents, and other supporting resources.
Some Perl 6 tutorials are available under the “docs” directory in the
release tarball.

The development team thanks all of the contributors and sponsors for
making Rakudo Star possible. If you would like to contribute, see the
Rakudo website for ways to help.

## 6guts: Of sisters, stacks, and CPAN

Recently, an open letter was published by Elizabeth Mattijsen, a long-time contributor to the Perl community who has in recent years contributed copiously to Rakudo Perl 6, as well as working to promote Perl (both 5 and 6) at numerous events. The letter made a number of points, some of them yielding decidedly unhappy responses. I’ve been asked a number of times by now for my take on the letter and, having had time to consider things, I’ve decided to write up my own thoughts on the issues at hand.

### Oh sister

A number of years back, I played a part in forming the “sister language narrative”. The reality – then and now – is that both Perl 5 and Perl 6 have userbases invested in them and a team willing to develop the languages and their implementations. These userbases partly overlap, and partly consist of people with an interest in only one or the other.

Following the Perl mantra that “there’s more than one way to do it”, I saw then – and see now – no reason for the advent of Perl 6 to impose an “end of life” on Perl 5. During the many years that Perl 6 took to converge on a stable language release with a production-ready implementation, Perl 5 continued to evolve to better serve its userbase. And again, I see no reason why it should not continue to do so. For one, Perl 6 is not a drop-in replacement for Perl 5, and it’s unreasonable to expect everyone using Perl 5 today to have a business case for migrating their existing systems to use Perl 6. When there’s a team eager to carry Perl 5 forwards, what’s to lose in continuing to work towards serving that Perl 5 userbase better? Caring about and supporting an existing userbase is a sign of a mature language development community. It shows that we’re not about “move fast and break stuff”, but “you can trust us with your stuff”.

Moreover, it’s very much the case that Perl 5 and Perl 6 make different trade-offs. To pick one concrete example, Perl 6 makes it easy to run code across multiple threads, and even uses multiple threads internally (for example, performing optimization and JIT compilation on a background thread). Which is great…except the only winning move in a game involving both threads and fork() is not to play. Sometimes one just can’t have their cake and eat it, and if you’re wanting a language that more directly gives you your POSIX, that’s probably always going to be a strength of Perl 5 over Perl 6.

For these reasons and more, it was clear to me that the best way forward was to simply accept, and rejoice in, the Perl community having multiple actively developed languages to offer the world. So where did the sister language narrative come in?

The number 6 happens to be a larger number than 5, and this carries some implications. I guess at the outset of the Perl 6 project, it was indeed imagined that Perl 6 would be a successor to Perl 5. By now, it’s instead like – if you’ll excuse me a beer analogy – Rochefort 8 and Rochefort 10: both excellent beers, from the same brewery, who have no reason to stop producing the 8 simply because they produce the 10. I buy both, and they’re obviously related, though different, and of course I have my preference, but I’m glad they both exist.

The point of the “sister language” narrative was to encourage those involved with Perl 5 and Perl 6 to acknowledge that both languages will continue to move forward, and to choose their words and actions so as to reflect this.

I continue to support this narrative, both in a personal capacity, and as the Rakudo Perl 6 Pumpking. (For those curious, “pumpking” is a cute Perl-y word for “project leader”, although one could be forgiven for guessing it means “flatulent monarch”.) Therefore, I will continue to choose my words and actions so as to support it, unless a new consensus with equally broad community buy-in comes to replace it.

I accept that this narrative may not be perfect for everyone, but it has been quite effective in encouraging those who feel strongly about just Perl 5 or just Perl 6 to focus their efforts constructively on building things, rather than trying to tear down the work of others. Therefore, it was no surprise to me that, when Liz’s open letter and follow-up comments went against that narrative, the consequences were anything but constructive.

I can’t, and don’t feel I should try to, control the views of others who contribute to Perl 6. Within reason, a diversity of views is part of a healthy community. I do, however, have a few things to ask of everyone. Firstly, when expressing a view that is known not to have widespread community consensus, please go to some lengths to make it clear it is a personal position. Secondly, when reading somebody’s expressed position on a matter, don’t assume it is anything more than their position unless clearly stated. And – perhaps most importantly – please also go to lengths to discuss topics calmly and politely. I was deeply disappointed by the tone of many of the discussions I saw taking place over the last week. We can do better.

### Perl 5 on the Rakudo stack?

The next section of the letter considered the possibility of a “Butterfly Perl 5” project: effectively, a port of Perl 5 to run atop of the Rakudo stack (in reality, this doesn’t involve Rakudo much at all, but rather NQP and MoarVM). As a compiler geek, this of course piques my interest, because what could be more fun than writing a compiler? :-) And before anyone gets excited – no, I don’t have the time to work on such a thing. But, in common with Liz, I’d be supportive of anyone wishing to experiment in that direction. There will be some deep challenges, and I’ll issue my usual warning that syntax is what gets all the attention but semantics are what will eat you alive.

Where I will disagree, however, is on the idea of a moratorium on new features in Perl 5. These appear slowly anyway (not because Perl 5 Porters are inefficient, but because adding new features to such a widely used language with a 30-year-old runtime is just a really darn hard thing to be doing). Given the immense technical challenges a Perl 5-on-new-VM effort would already face, the odd new feature would be a drop in the bucket.

My one piece of unsolicited advice to those working on Perl 5 would be to borrow features from Perl 6 very carefully, because they are generally predicated on a different type system and many other details that differ between the Perls. (My diagnosis of why smartmatch has presented such a challenge in Perl 5 is because it was designed assuming various aspects of the Perl 6 type system, which don’t map neatly to Perl 5. This isn’t surprising, given Perl 6 very consciously set out to do differently here.) But should Perl 5 simply stop exploring ways to make things better for its userbase? No, I don’t think so. From a Perl 5 user point of view (which is the only one I have), adding subroutine signatures – which deliver existing semantics with less boilerplate – feels like a sensible thing to do. And adding features to keep up with the needs of the latest versions of the Unicode specification is a no-brainer. “Perl (5 or 6) is great at Unicode” is a meme that helps us all.

### The CPAN butterfly plan

The final part of the letter proposes a focused porting effort of Perl 5 modules to Perl 6. This idea has my support. While the Perl 6 module ecosystem has been growing – and that’s wonderful – there’s still a good number of things missing. Efforts to address that expand the range of tasks to which Perl 6 can be conveniently applied. Of course, one can already reach such modules with the excellent Inline::Perl5. Some folks have reservations about dual runtimes in production, however, and interop between two runtimes has some marshalling cost – although with recent optimization work, the cost has been brought well down from what it was.

Of course, in some areas it’s strategic to build something afresh that takes into account the opportunities offered by Perl 6. That’s why I designed Cro by reading RFCs and considering the best way to implement them in Perl 6. Even then, I did it with the benefit of 20 years experience doing web stuff – without which, I suspect the result would have been rather less useful. Porting existing modules – taking time to tweak their APIs to feel more comfortable in Perl 6 – means that existing knowledge built up by the authors of those modules can be carried forward, which is surely a win.

### In closing

I’ve had the privilege of working with Liz for a number of years. While her open letter makes a number of points I disagree with, I have no reason to believe it wasn’t written out of good intentions. Some people have wondered if any good can come of it. That’s up to us as a community. I think we can, at least, take away:

• A reminder of the importance that the “sister language” narrative plays in our community. This should encourage all of us to – at the very least – quietly tolerate it.
• Further evidence that the only way that narrative will be replaced is by following the same path that created it: through quiet, thoughtful, discussions with all major stakeholders, to reach a consensus.
• That a Perl 5 atop of NQP/MoarVM effort – should anybody wish to try it – will receive some support. As I’ve noted, the technical challenges are considerable, and deep. But they were for implementing Perl 6 too, and we’ve managed that. :-)
• A much-needed injection of energy into increasing the number of modules in the Perl 6 ecosystem, taking inspiration from (and so hopefully avoiding mistakes already made and rectified by) Perl 5 modules.

And, last but not least, it has been clear that – while it has in the last days often been expressed in raw and heated ways – we, as the Perl community, have two languages we’re passionate about and are keen to drive forward. Let’s do that together, in peace, not in pieces.

## Zoffix Znet: Perl 6 Core Hacking: QASTalicious

### Published on 2018-01-26T00:00:00

Overview of "Q" Abstract Syntax Trees + bug fix tutorial

## Zoffix Znet: Long Live Perl 5!

### Published on 2018-01-19T00:00:00

Thoughts on the open letter to Perl community

## gfldex: Expensive Egg-Timers

If you use a CLI you might have done something along these lines:

sleep 1m 30s; do-the-next-thing

I have a script called OK that will display a short text in a hopeful green and morse code O-K via the PC speaker. By doing so I turn my computer into an expensive egg-timer.

As of late I found myself waiting for longer periods of time and was missing a count-down so I could estimate how much more time I can waste playing computer games. The result is a program called count-down.

Since I wanted to mimic the behaviour of sleep as closely as possible I had a peek into its source code. That made me realise how lucky I am to be allowed to use Perl 6. If I strip all the extra bits a count-down needs I’m at 33 lines of code compared to 154 lines of GNU sleep. The boilerplate I have is mostly for readability, like defining a subset called Seconds and a Regex called number.

Errors in the arguments to the script will be caught by the where clause in MAIN’s signature. Since there are no further multi candidates for MAIN that might interfere, the usage message will be displayed automatically if arguments are not recognized. Pretty much all lines in the C implementation deal with argument handling and the fact that they can’t trust their arguments until the last bit of handling is done. With a proper signature a Perl 6 Routine can fully trust its arguments and no further error handling is needed. Compared to the C version (which does a lot less) the code can be read linearly from top to bottom and is much more expressive. After changing a few identifiers I didn’t feel the need for comments anymore. Even some unclear code, like splitting on numbers and keeping the values, becomes clear in the next lines where I sum up a list of seconds.
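The pattern described above might look like this minimal sketch (hypothetical names, not the actual count-down source; just an illustration of a subset plus a where clause doing all the argument checking):

```perl6
subset Seconds of Int where * > 0;

# Arguments not matching the where clause fall through to the
# auto-generated usage message - no manual error handling needed.
sub MAIN($duration where /^ \d+ s? $/) {
    my Seconds $seconds = $duration.subst('s', '').Int;
    for $seconds ... 1 -> $s {
        say $s;
        sleep 1;
    }
    say "Time's up!";
}
```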

Now I can comfortably count down the rest of a year that was made much better by a much-improved Perl 6. I wish you all a happy 2018.

## Perl 6 Advent Calendar: Bonus Xmas – Concurrent HTTP Server implementation and the scripter’s approach

First of all, I want to highlight the work done on Rakudo Perl6 and IO::Socket::Async. Thanks, Jon!

***

I like to make scripts; write well-organized sequences of actions, get results and do things with them.

When I began with Perl6 I discovered a spectacular ecosystem, where I could put my ideas into practice in the way that I like: the script manner. One of these ideas was to implement a small HTTP server to play with. Looking at other projects and modules related to Perl6, HTTP and sockets, I discovered that the authors behind them were programmers with great experience in Object-Oriented programming.

Perl6 supports the three most popular programming paradigms:

• Object-Oriented
• Functional
• Procedural

I think that the Object-Oriented paradigm is fine when you design an application or service that will grow, will do many and varied things and will have many changes. But I don’t like things that grow too much and have many changes; that’s why I like scripts, for their native procedural approach, because it promotes simplicity and effectiveness quickly. I like small (step by step) things that do great things quickly.

The Functional paradigm is awesome in my opinion; you can take a function and use it like a var, among other amazing things.

# Perl6 Supplies are like a V12 engine

Shortly after I started with Perl6, I began the translation of perl6intro.com into Spanish. Looking at the documentation of Perl6 I discovered the great concurrent potential that Perl6 has. The concurrent aspect of Perl6 was more powerful than I thought.

The idea I had of the HTTP server with Perl6 began with the Perl6 Supplies (asynchronous data streams with multiple subscribers), specifically with the class IO::Socket::Async. All socket management, data transmission and concurrency is practically automatic and easy to understand. It was perfect for making and playing with a small, concurrent but powerful service.

Based on the examples of the IO::Socket::Async documentation I started to implement a small HTTP server with pseudoCGI support in the mini-http-cgi-server project, and it worked as I expected. As I got what I wanted, I was satisfied and I left this project for a while. I didn’t like things to grow too much.

But then, preparing a talk for the Madrid Perl Workshop 2017 (thanks to Madrid Perl Mongers and Barcelona Perl Mongers guys for the event support), I had enough motivation to do something more practical, something where web front-end coders could do their job well and communicate with the back-end where Perl6 is awaiting. On the one hand, the typical public html static structure, and on the other hand a Perl6 module including several webservices waiting for the web requests from the front-end guys.

Then Wap6 was born (Web App Perl6).

# The Wap6 structure

I like the structure for a web application that Wap6 implements:

• public
• webservices

public folder contains the friendly front-end stuff, like static html, javascript, css, etc., that is, the front-end developer space. The webservices folder contains the back-end stuff: a Perl6 module including a function per webservice.

This same folder level contains the solution's entry point: a Perl6 script that, among other things such as server initialization parameters, contains the mapping between routes and webservices:

my %webservices =
'/ws1' => ( &ws1, 'html' ),
'/ws2' => ( &ws2, 'json' )
;


As you can see, the routes are not only mapped to the corresponding webservice; they also specify the webservice's return content-type (like HTML or JSON). That is, you type http://domain/ws1 in the web browser and the ws1 function returns the response data with the corresponding content-type, as we will see later.

All the routes to the webservices live in the %webservices hash, which is passed to the main function wap along with other useful named parameters:

wap(:$server-ip, :$server-port, :$default-html, :%webservices);

# The core of Wap6

The wap function is located outside, in the core lib module that Wap6 uses, and contains the concurrent and elegant V12 engine:

react {
    whenever IO::Socket::Async.listen($server-ip, $server-port) -> $conn {
        whenever $conn.Supply(:bin) -> $buf {
            my $response = response(:$buf, :$current-dir, :$default-html, :%webservices);
            $conn.write: $response.encode('UTF-8');
            $conn.close;
        }
    }
}

This react – whenever – IO::Socket::Async trio forms a reactive, concurrent and asynchronous context. When a transmission arrives from the web client ($conn), it is placed in a new binary Supply $buf ($conn.Supply(:bin)), and $buf, along with other things like the %webservices hash, is sent to the response function, which runs the HTTP logic. Finally, the return value of the response function is written back to the web client.

The response function (also located outside, in the core lib) contains the HTTP parser stuff: it splits the incoming data (the HTTP entity) into headers and body, performs validations, extracts basic HTTP header information such as the method (GET or POST) and the URI (Uniform Resource Identifier), determines whether the requested resource is a webservice (from the webservices folder) or a static file (from the public folder), gets the data from that resource, and returns it to the wap function, which writes the response to the web client, as we saw before.

# The Webservices

The response function validates $buf and extracts the HTTP method from the request header, which can be GET or POST (I don't think it will support more HTTP methods in the future). In the GET case, it puts the URL parameters (if any) into $get-params. In the POST case, it puts the request body into $body.

Then it's time to check whether the web client has requested a webservice. $get-params includes the URI, which is extracted with the URI module; the result is placed in $path:

given $path {
    when %webservices{"$_"}:exists {
        my ( &ws, $direct-type ) = %webservices{"$_"};
        my $type = content-type(:$direct-type);
        return response-headers(200, $type) ~ &ws(:$get-params, :$body);
    }
    ..
}

If $path exists in the %webservices hash, the client wants a webservice. The corresponding callable webservice function &ws is then extracted from the %webservices hash (yes, I also love the Functional paradigm :-) ), along with the corresponding content-type. Then the webservice function &ws is called with the $get-params and the request $body parameters. Finally, it returns the HTTP response entity, which concatenates:

• The response headers with the status HTTP 200 OK and the given content-type (from the content-type function).
• The webservice output.

The callable webservice &ws can be ws1, located in the Perl6 module in the webservices folder:

sub ws1 ( :$get-params, :$body ) is export {
    if $get-params { return 'From ws1: ' ~ $get-params; }
    if $body { return 'From ws1: ' ~ $body; }
}


In this demo context the webservice simply returns the input, that is, the $get-params (when GET) or the$body (when POST).
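To see the dispatch mechanism in isolation, here is a minimal, self-contained sketch; ws1 is simplified, the content-type handling is reduced to a plain lookup, and the request data is hard-coded, so this is an illustration of the pattern rather than Wap6 itself:

```raku
# Minimal sketch of Wap6-style dispatch: a route maps to a
# (function, content-type) pair; the function gets the request data.
sub ws1 ( :$get-params, :$body ) {
    return 'From ws1: ' ~ $get-params if $get-params;
    return 'From ws1: ' ~ $body if $body;
}

my %webservices =
    '/ws1' => ( &ws1, 'html' ),
;

my $path = '/ws1';
if %webservices{$path}:exists {
    my ( &ws, $type ) = %webservices{$path};
    say "Content-Type: $type";      # Content-Type: html
    say ws(:get-params('x=1'));     # From ws1: x=1
}
```

The same destructuring of the hash value into a callable and a content-type is what the response function shown above does for every matched route.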

# When the client requests a static file

After discarding all the other possibilities, if the client requests a static file hosted in the public folder (html, js, css, etc.), then:

given $path {
    ..
    default {
        my $filepath = "$current-dir/public/$path";
        my $type = content-type(:$filepath);
        return response-headers(200, $type) ~ slurp "$current-dir/public/$path";
    }
}

It returns the response headers, including the matched content-type, and the contents of the requested file, read with slurp.

And that's all, folks! A concurrent web server in the script-procedural manner: Wap6.

# Epilogue

I'm happy with the results of Wap6. I don't intend for it to grow a lot, but I'm always tempted to keep adding features: SSL support (completed!), session management (in progress), cookies, file uploads, etc.

Perl6 has put on the table a very powerful way to perform concurrent network operations: IO::Socket::Async, a masterpiece. Also, with Perl6 you can mix the Object-Oriented, Procedural and Functional paradigms as you wish. With these capabilities you can design a concurrent asynchronous service and implement it quickly.

If you want a more serious approach to HTTP services and concurrency in the Perl6 ecosystem, take a look at Cro; it represents a great opportunity to establish Perl6 as a powerful player in the HTTP services space. Jonathan Worthington wrote about it on the 9th in this same Advent Calendar.

Meanwhile, I will continue playing with Wap6, in the script manner, contributing to the Perl6 ecosystem and learning from the best coders in the world, I mean: Perl and Perl6 coders, of course :-)

## Perl 6 Advent Calendar: Day 24 – Solving a Rubik’s Cube

### Published by coke on 2017-12-24T00:33:51

# Intro

I have a speed cube on my wish list for Christmas, and I'm really excited about it. :) I wanted to share that enthusiasm with some Perl 6 code.

I graduated from high school in '89, so I'm just the right age to have had a Rubik's cube through my formative teen years. I remember trying to show off on the bus and getting my time down to just under a minute. I got a booklet from a local toy store back in the 80s that showed an algorithm for solving the cube, which I memorized. I don't have the booklet anymore.
I've kept at it over the years, but never at a competitive level. In the past few months, YouTube has suggested a few cube videos to me based on my interest in the standupmaths channel; seeing the world record come in under 5 seconds makes my old time of a minute seem ridiculously slow.

Everyone I've spoken to who can solve the cube has been using a different algorithm than I learned, and the one discussed on standupmaths is yet another one. The advanced version of that one, though, seems to be commonly used by those who are regularly setting world records.

Picking up this algorithm was not too hard; I found several videos, especially one describing how to solve the last layer. After doing this for a few days, I transcribed the steps into a few notes showing the list of steps and the crucial parts for each step: the desired orientation, followed by the individual turns for that step. I was then able to refer to a single page of my notebook instead of a 30-minute video, and after a few more days, had memorized the steps: being able to go from the notation to just doing the moves is a big speed-up.

After a week, I was able to solve it reliably using the new method in under two minutes; a step back, but not bad for a week's effort in my off hours. Since then (a few weeks now), I've gotten down to under 1:20 pretty consistently. Again, this is the beginner method, without any advanced techniques, and I'm at the point where I can do the individual algorithm steps without looking at the cube. (I still have a long way to go to be competitive, though.)

# Notation

A quick note about the notation for moves. Given that you're holding the cube with a side on the top and one side facing you, the relative sides are:

• L (Left)
• R (Right)
• U (Up)
• D (Down)
• F (Front)
• B (Back)

If you see a lone letter in the steps, like B, that means to turn that face clockwise (relative to the center of the cube, not to you).
If you add a ʼ to the letter, that means counterclockwise: Rʼ would have the top piece coming down, while R would have the bottom piece coming up.

Additionally, you might have to turn a slice twice, which is written as U2. (It doesn't matter whether it's clockwise or not, since it's 180º from the starting point.)

# Algorithm

The beginner's algorithm I'm working with has the following basic steps:

1. White cross
2. White corners
3. Second layer
4. Yellow cross
5. Yellow edges
6. Yellow corners
7. Orient yellow corners

If you're curious as to what the individual steps are in each, you can dig through the Rubik's wiki or the YouTube video linked above. More advanced versions of this algorithm (CFOP by Jessica Fridrich) allow you to combine steps, have specific "shortcuts" to deal with certain cube states, and solve any color as the first side, not just white.

# Designing a Module

As I began working on the module, I knew I wanted to get to a point where I could show the required positions for each step in a way that was natural to someone familiar with the algorithm, and to have the individual steps also be natural, something like:

F.R.U.Rʼ.Uʼ.Fʼ

I also wanted to be able to dump the existing state of the cube, for now as text, but eventually tying it into a visual representation as well. We need to be able to tell if the cube is solved, to inspect pieces relative to the current orientation, and to change our orientation.

Since I was going to start with the ability to render the state of the cube, and then quickly add the ability to turn sides, I picked an internal structure that made that fairly easy.

# The Code

The latest version of the module is available on github. The code presented here is from the initial version.
Perl 6 lets you create Enumerations so you can use actual words in your code instead of lookup values, so let's start with some we'll need:

enum Side «:Up('U') :Down('D') :Front('F') :Back('B') :Left('L') :Right('R')»;
enum Colors «:Red('R') :Green('G') :Blue('B') :Yellow('Y') :White('W') :Orange('O')»;

With this syntax, we can use Up directly in our code, and its associated value is U.

We want a class so we can store attributes and have methods, so our class definition starts with:

class Cube::Three {
    has %!Sides;
    ...
    submethod BUILD() {
        %!Sides{Up}    = [White xx 9];
        %!Sides{Front} = [Red xx 9];
        ...
    }
}

We have a single attribute, a Hash called %!Sides; each key corresponds to one of the enum sides. The value is a 9-element array of Colors. Each element in the array corresponds to a position on the cube. With white on top and red in front as the default, the colors and cell positions are shown here with the numbers & colors (White is Up, Red is Front):

          W0 W1 W2
          W3 W4 W5
          W6 W7 W8
G2 G5 G8  R2 R5 R8  B2 B5 B8  O2 O5 O8
G1 G4 G7  R1 R4 R7  B1 B4 B7  O1 O4 O7
G0 G3 G6  R0 R3 R6  B0 B3 B6  O0 O3 O6
          Y0 Y1 Y2
          Y3 Y4 Y5
          Y6 Y7 Y8

The first methods I added were to do clockwise turns of each face.

method F {
    self!rotate-clockwise(Front);
    self!fixup-sides([
        Pair.new(Up,    [6,7,8]),
        Pair.new(Right, [2,1,0]),
        Pair.new(Down,  [2,1,0]),
        Pair.new(Left,  [6,7,8]),
    ]);
    self;
}

This public method calls two private methods (denoted with the !): one rotates a single Side clockwise, and the other takes a list of Pairs, where the key is a Side and the value is a list of positions. If you imagine rotating the front of the cube clockwise, you can see that the positions are being swapped from one to the next.

Note that we return self from the method; this allows us to chain the method calls as we wanted in the original design.

The clockwise rotation of a single side shows a raw Side being passed, and uses array slicing to change the order of the pieces in place.
# 0 1 2       6 3 0
# 3 4 5   ->  7 4 1
# 6 7 8       8 5 2
method !rotate-clockwise(Side \side) {
    %!Sides{side}[0,1,2,3,5,6,7,8] = %!Sides{side}[6,3,0,7,1,8,5,2];
}

To add the rest of the notation for the moves, we add some simple wrapper methods:

method F2 { self.F.F; }
method Fʼ { self.F.F.F; }

F2 just calls the move twice; Fʼ cheats: three rights make a left.

At this point, I had to make sure that my turns were doing what they were supposed to, so I added a gist method (which is called when an object is output with say).

say Cube::Three.new.U2.D2.F2.B2.R2.L2;

      W Y W
      Y W Y
      W Y W
G B G  R O R  B G B  O R O
B G B  O R O  G B G  R O R
G B G  R O R  B G B  O R O
      Y W Y
      W Y W
      Y W Y

The source for the gist is:

method gist {
    my $result;
    $result = %!Sides{Up}.rotor(3).join("\n").indent(6);
    $result ~= "\n";
    for 2,1,0 -> $row {
        for (Left, Front, Right, Back) -> $side {
            my @slice = (0,3,6) >>+>> $row;
            $result ~= ~%!Sides{$side}[@slice].join(' ') ~ ' ';
        }
        $result ~= "\n";
    }
    $result ~= %!Sides{Down}.rotor(3).join("\n").indent(6);
    $result;
}



A few things to note:

• use of .rotor(3) to break up the 9-cell array into three 3-element lists.
• .indent(6) to prepend whitespace on the Up and Down sides.
• (0,3,6) >>+>> $row, which increments each value in the list.

The gist is great for stepwise inspection, but for debugging, we need something a little more compact:

method dump {
    gather for (Up, Front, Right, Back, Left, Down) -> $side {
        take %!Sides{$side}.join('');
    }.join('|');
}

This iterates over the sides in a specific order, uses the gather take syntax to collect a string representation of each side, and then joins them all together with a |. Now we can write tests like:

use Test;
use Cube::Three;
my $a = Cube::Three.new();
is $a.R.U2.Rʼ.Uʼ.R.Uʼ.Rʼ.Lʼ.U2.L.U.Lʼ.U.L.dump,
    'WWBWWWWWB|RRRRRRRRW|BBRBBBBBO|OOWOOOOOO|GGGGGGGGG|YYYYYYYYY',
    'corners rotation';

This is actually the sequence used in the final step of the algorithm. With this debug output, I can take a pristine cube, do the moves myself, and then quickly transcribe the resulting cube state into a string for testing.

While the computer doesn't necessarily need to rotate the cube, it will make it easier to follow the algorithm directly if we can, so we add a method for each of the six possible turns, e.g.:

method rotate-F-U {
    self!rotate-clockwise(Right);
    self!rotate-counter-clockwise(Left);

    # In addition to moving the side data, we have to
    # re-orient the indices to match the new side.
    my $temp = %!Sides{Up};

    %!Sides{Up}    = %!Sides{Front};
    self!rotate-counter-clockwise(Up);
    %!Sides{Front} = %!Sides{Down};
    self!rotate-clockwise(Front);
    %!Sides{Down}  = %!Sides{Back};
    self!rotate-clockwise(Down);
    %!Sides{Back}  = $temp;
    self!rotate-counter-clockwise(Back);
    self;
}

As we turn the cube from Front to Up, we rotate the Left and Right sides in place. Because the orientation of the cells changes as we change faces, as we copy the cells from face to face we may also have to rotate them to ensure they end up facing the correct direction. As before, we return self to allow for method chaining.

As we start testing, we need to make sure that we can tell when the cube is solved; we don't care about the orientation of the cube, so we verify that the center color matches all the other colors on the face:

method solved {
    for (Up, Down, Left, Right, Back, Front) -> $side {
        return False unless
            %!Sides{$side}.all eq %!Sides{$side}[4];
    }
    return True;
}



For every side, we use a Junction of all the colors on the side to compare against the center cell (always position 4). We fail early, and succeed only if we've made it through all the sides.
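The `.all` junction comparison used in solved can be tried on its own; here is a tiny standalone sketch using a hypothetical three-cell "side":

```raku
# A junction built with .all collapses to True only if every
# element satisfies the comparison.
my @side = <W W W>;
say so @side.all eq @side[1];   # True: every cell matches

@side[2] = 'R';
say so @side.all eq @side[1];   # False: one mismatch is enough
```

`so` forces the junction to a plain Bool, which is what the `unless` in solved does implicitly.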

Next I added a way to scramble the cube, so we can consider implementing a solve method.



method scramble {
    my @random = <U D F R B L>.roll(100).squish[^10];
    for @random -> $method {
        my $actual = $method ~ ("", "2", "ʼ").pick(1);
        self."$actual"();
    }
}



This takes the six base method names, picks a bunch of random values, squishes them (ensuring that there are no duplicates in a row), and then takes the first 10 values. We then potentially append a 2 or a ʼ. Finally, we use the indirect method syntax to call the individual methods by name.
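The pieces of scramble can be tried separately; a small sketch (the Demo class here is hypothetical, standing in for Cube::Three):

```raku
# roll picks 100 random moves (with repeats), squish removes
# immediate duplicates, [^10] keeps the first ten.
my @random = <U D F R B L>.roll(100).squish[^10];
say @random.elems;      # 10

# each move may get a random modifier appended
my @moves = @random.map: { $_ ~ ("", "2", "ʼ").pick };
say @moves;             # e.g. (R Uʼ F2 L D F U2 B R F)

# indirect method syntax: call a method by a runtime string
class Demo { method U { say 'turned the Up face' } }
Demo.new."U"();         # turned the Up face
```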

Finally, I'm ready to start solving! And this is where things got complicated. The first steps of the beginner method are often described as intuitive. Which means it's easy to explain… but not so easy to code. So, spoiler alert, as of the publish time of this article, only the first step of the solve is complete. For the full algorithm for the first step, check out the linked github site.



method solve {
    self.solve-top-cross;
}

method solve-top-cross {
    sub completed {
        %!Sides{Up}[1,3,5,7].all eq 'W' &&
        %!Sides{Front}[5] eq 'R' &&
        %!Sides{Right}[5] eq 'B' &&
        %!Sides{Back}[5]  eq 'O' &&
        %!Sides{Left}[5]  eq 'G';
    }
    ...
    MAIN:
    while !completed() {
        # Move white-edged pieces in the second row up to the top
        # Move incorrectly placed pieces in the top row to the middle
        # Move pieces from the bottom to the top
    }
}



Note the very specific checks to see if we're done; we use a lexical sub to wrap up the complexity – and while we have a fairly internal check here, we see that we might want to abstract this to a point where we can say "is this edge piece in the right orientation". To start with, however, we'll stick with the individual cells.
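That abstraction could look something like this hypothetical helper (not part of the module; string keys stand in for the enums):

```raku
# Hypothetical edge-check helper: are a given Up-edge cell and the
# touching cell on its side showing the expected colors?
class Demo {
    has %.Sides;
    method edge-ok($up-pos, $side, $side-pos, $up-color, $side-color) {
        %!Sides<Up>[$up-pos] eq $up-color
            and %!Sides{$side}[$side-pos] eq $side-color;
    }
}

my $d = Demo.new: Sides => { Up => ['W' xx 9], Front => ['R' xx 9] };
say $d.edge-ok(1, 'Front', 5, 'W', 'R');   # True
```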

The guts of solve-top-cross are 100+ lines long at the moment, so I won't go through all the steps. Here's the "easy" section:



my @middle-edges =
    [Front, Right],
    [Right, Back],
    [Back,  Left],
    [Left,  Front],
;

for @middle-edges -> $edge {
    my $side7 = $edge[0];
    my $side1 = $edge[1];
    my $color7 = %!Sides{$side7}[7];
    my $color1 = %!Sides{$side1}[1];
    if $color7 eq 'W' {
        # find number of times we need to rotate the top:
        my $turns = ( @ordered-sides.first($side1, :k) -
            @ordered-sides.first(%expected-sides{~$color1}, :k) ) % 4;
        self.U for 1..$turns;
        self."$side1"();
        self.Uʼ for 1..$turns;
        next MAIN;
    } elsif $color1 eq 'W' {
        my $turns = (
            @ordered-sides.first($side7, :k) -
            @ordered-sides.first(%expected-sides{~$color7}, :k)
        ) % 4;
        self.Uʼ for 1..$turns;
        self."$side1"();
        self.U for 1..$turns;
        next MAIN;
    }
}

When doing this section on a real cube, you'd rotate the cube without regard to the side pieces, and just get the cross in place. To make the algorithm a little more "friendly", we keep the centers in position for this: we rotate the Up side into place, then rotate the individual side into place on the top, then rotate the Up side back to its original position.

One of the interesting bits of code here is the .first(..., :k) syntax, which says to find the first element that matches, and then return the position of the match. We can then look things up in an ordered list to calculate the relative positions of two sides.

Note that the solving method only calls the public methods to turn the cube; while we use raw introspection to get the cube state, we only use "legal" moves to do the solving.

With the full version of this method, we can now solve the white cross with this program:

#!/usr/bin/env perl6
use Cube::Three;
my $cube = Cube::Three.new();

$cube.scramble; say$cube;

say '';

$cube.solve; say$cube;



which generates this output given this set of moves (Fʼ L2 B2 L Rʼ Uʼ R Fʼ D2 B2). First comes the scramble, then the version with the white cross solved.

      W G G
      Y W W
      Y Y Y
O O B  R R R  G B O  Y Y B
R G O  B R R  G B G  W O B
Y B B  R O W  G G G  W W O
      W W O
      Y Y O
      B R R

      Y W W
      W W W
      G W R
O G W  O R Y  B B G  R O G
Y G G  R R B  R B Y  R O G
O O R  Y O W  O O R  W Y B
      G G B
      B Y Y
      Y B B


This sample prints out the moves used to do the scramble, shows the scrambled cube, "solves" the puzzle (which, as of this writing, is just the white cross), and then prints out the new state of the cube.

Note that as we get further along, the steps become less "intuitive" and, in my estimation, much easier to code. For example, the last step requires checking the orientation of four pieces, rotating the cube if necessary, and then doing a 14-step set of moves (shown in the test above).

Hopefully my love of cubing and Perl 6 have you looking forward to your next project!

I'll note in the comments when the module's solve is finished, for future readers.

## Perl 6 Advent Calendar: Day 23 – The Wonders of Perl 6 Golf

Ah, Christmas! What could possibly be better than sitting around the table with your friends and family and playing code golf! … Wait, what?

Oh, right, it’s not Christmas yet. But you probably want to prepare yourself for it anyway!

If you haven’t noticed already, there’s a great website for playing code golf: https://code-golf.io/. The cool thing about it is that it’s not just for perl 6! At the time of writing, 6 other langs are supported. Hmmm…

Anyway, as I’ve got some nice scores there, I thought I’d share some of the nicest bits from my solutions. All the trickety-hackety, unicode-cheatery and mind-blowety. While we are at it, maybe we’ll even see that perl 6 is quite concise and readable even in code golf. That is, if you have a hard time putting your Christmas wishes on a card, maybe a line of perl 6 code will do.

I won’t give full solutions to not spoil your Christmas fun, but I’ll give enough hints for you to come up with competitive solutions.

All I want for Christmas is for you to have some fun. So get yourself rakudo to make sure you can follow along. Later we’ll have some pumpkin pie and we’ll do some caroling. If you have any problems running perl 6, perhaps join #perl6 channel on freenode to get some help. That being said, https://code-golf.io/ itself gives you a nice editor to write and eval your code, so there should be no problem.

## Some basic examples

Let’s take Pascal’s Triangle task as an example. I hear ya, I hear! Math before Christmas, that’s cruel. Cruel, but necessary.

There’s just one basic trick you have to know. If you take any row from the Pascal’s Triangle, shift it by one element and zip-sum the result with the original row, you’ll get the next row!

So if you had a row like:



1 3 3 1



All you do is just shift it to the right:



0 1 3 3 1



And sum it with the original row:



1 3 3 1

+ + + +

0 1 3 3 1

=

1 4 6 4 1



As simple as that! So let’s write that in code:



for ^16 { put (+combinations($^row,$_) for 0..$row) }

You see! Easy! … oh… Wait, that's a completely different solution. OK, let's see:

.put for 1, { |$_,0 Z+ 0,|$_ } … 16

Output:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
1 8 28 56 70 56 28 8 1
1 9 36 84 126 126 84 36 9 1
1 10 45 120 210 252 210 120 45 10 1
1 11 55 165 330 462 462 330 165 55 11 1
1 12 66 220 495 792 924 792 495 220 66 12 1
1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
1 14 91 364 1001 2002 3003 3432 3003 2002 1001 364 91 14 1
1 15 105 455 1365 3003 5005 6435 6435 5005 3003 1365 455 105 15 1

Ah-ha! There we go. So what happened there? Well, in perl 6 you can create sequences with a very simple syntax: 2, 4, 8 … ∞. Normally you'll let it figure out the sequence by itself, but you can also provide a code block to calculate the values. This is awesome! In other languages you'd often need a loop with a state variable, while here it does all that for you! This feature alone probably needs an article or 𝍪.

The rest is just a for loop and a put call. The only trick is to understand that it is working with lists, so when you specify the endpoint for the sequence, it is actually checking the number of elements. Also, you need to flatten the list with |.

If you remove whitespace and apply all the tricks mentioned in this article, this should get you to 26 characters. That's rather competitive.

Similarly, other tasks often have rather straightforward solutions. For example, for Evil Numbers you can write something like this:

.base(2).comb(~1) %% 2 && .say for ^50

Remove some whitespace, apply some tricks, and you'll be almost there. Let's take another example: Pangram Grep. Here we can use set operators:

‘a’..‘z’ ⊆ .lc.comb && .say for @*ARGS

Basically, almost all perl 6 solutions look like real code. It's the extra -1 character oomph that demands extra eye pain, but you didn't come here to listen about conciseness, right? It's time to get dirty.
## Numbers

Let's talk numbers! 1 ² ③ ٤ ⅴ ߆… *cough*. You see, in perl 6 any numeric character (that has a corresponding numeric value property) can be used in the source code. The feature was intended to allow us to have some goodies like ½ and other neat things, but this means that instead of writing 50 you can write ㊿. Some golfing platforms count the number of bytes when encoded in UTF-8, so it may seem like you're not winning anything. But what about 1000000000000 and 𖭡? In any case, code-golf.io is unicode-aware, so the length of any of these characters will be 1.

So you may wonder, which numbers can you write in that manner? There you go:

-0.5 0.00625 0.025 0.0375 0.05 0.0625 0.083333 0.1 0.111111 0.125 0.142857 0.15 0.166667 0.1875 0.2 0.25 0.333333 0.375 0.4 0.416667 0.5 0.583333 0.6 0.625 0.666667 0.75 0.8 0.833333 0.875 0.916667 1 1.5 2 2.5 3 3.5 4 4.5 5 5.5 6 6.5 7 7.5 8 8.5 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 60 70 80 90 100 200 300 400 500 600 700 800 900 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 20000 30000 40000 50000 60000 70000 80000 90000 100000 200000 216000 300000 400000 432000 500000 600000 700000 800000 900000 1000000 100000000 10000000000 1000000000000

This means, for example, that in some cases you can save 1 character when you need to negate the result. There are many ways you can use this, and I'll only mention one particular case. The rest you can figure out yourself, as well as how to find the actual character for any particular value (hint: loop over all 0x10FFFF characters and check their .univals).

For example, when golfing you want to get rid of unnecessary whitespace, so maybe you'll want to write something like:

say 5max3 # ERROR

It does not work, of course, and we can't really blame the compiler for not untangling that mess. However, check this out:

say ⑤max③ # OUTPUT: «5␤»

Woohoo!
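The hint about looping over characters and checking their .univals can be sketched directly. Scanning the full 0x10FFFF range works but is slow, so this illustration only scans one small block of codepoints:

```raku
# Find characters whose numeric value is 50 by checking .unival.
# (NaN == 50 is simply False for non-numeric characters.)
for 0x3248 .. 0x32FF -> $cp {
    say $cp.chr, ' U+', $cp.base(16) if $cp.chr.unival == 50;
}
```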
This digit-character trick will work in many other cases.

## Conditionals

If there is a good golfing language, it's not Perl 6. I mean, just look at this:

puts 10<30?1:2 # ruby

say 10 <30??1!!2 # perl 6

Not only are TWO more characters needed for the ternary, but some obligatory whitespace around the < operator as well! What's wrong with them, right? How dare they design a language with no code golf in mind!

Well, there are some ways we can work around it. One of them is operator chaining. For example:

say 5>3>say(42)

If 5 is ≤ 3, then there's no need to do the other comparison, so it won't run it. This way we can save at least one character.

On a slightly related note, remember that junctions may also come in handy:

say ‘yes!’ if 5==3|5

And of course, don't forget about unicode operators: ≥, ≤, ≠.

## Typing is hard, let's use some of the predefined strings!

You wouldn't believe how useful this is sometimes. Want to print the names of all chess pieces? OK:

say (‘♔’…‘♙’)».uniname».words»[2]
# KING QUEEN ROOK BISHOP KNIGHT PAWN

This saves just a few characters, but there are cases when it can halve the size of your solution. But don't stop there; think of error messages, method names, etc. What else can you salvage?

## Base 16? Base 36? Nah, Base 0x10FFFF!

One of the tasks tells us to print φ to the first 1000 decimal places. Well, that's very easy!
say ‘1.6180339887498948482045868343656381177203091798057628621354486227052604628189024497072072041893911374847540880753868917521266338622235369317931800607667263544333890865959395829056383226613199282902678806752087668925017116962070322210432162695486262963136144381497587012203408058879544547492461856953648644492410443207713449470495658467885098743394422125448770664780915884607499887124007652170575179788341662562494075890697040002812104276217711177780531531714101170466659914669798731761356006708748071013179523689427521948435305678300228785699782977834784587822891109762500302696156170025046433824377648610283831268330372429267526311653392473167111211588186385133162038400522216579128667529465490681131715993432359734949850904094762132229810172610705961164562990981629055520852479035240602017279974717534277759277862561943208275051312181562855122248093947123414517022373580577278616008688382952304592647878017889921990270776903895321968198615143780314997411069260886742962267575605231727775203536139362’

Yes!!! Okay, that takes a bit more than 1000 characters… Of course, we can try to calculate it, but that is not exactly in the Christmas spirit. We want to cheat.

If we look at the docs about polymod, there's a little hint:

my @digits-in-base37 = 9123607.polymod(37 xx *); # Base conversion

Hmmm… so that gives us digits in any arbitrary base. How high can we go? Well, it depends on what form we'd like to store the number in. Given that code-golf.io counts codepoints, we can use base 0x10FFFF (i.e. using all available codepoints). Or, in this case, we'll go with base 0x10FFFE, because:

WARNING! WARNING! WARNING! THIS WILL MAKE YOUR COMPUTER IMPLODE! UNICODE STRINGS ARE SUBJECT TO NORMALIZATION SO YOUR DATA WILL NOT BE PRESERVED. HIDE YOUR KIDS, HIDE YOUR WIFE. HIDE YOUR KIDS, HIDE YOUR WIFE. HIDE YOUR KIDS, HIDE YOUR WIFE. AND HIDE YOUR HUSBAND. WARNING! WARNING! WARNING!
When applied to our constant, it should give something like this:

󻁾񤍠򷒋󜹕󘶸񙦅񨚑򙯬񗈼𢍟𪱷򡀋𢕍򌠐񘦵𔇆򅳒򑠒󌋩򯫞򶝠򚘣򣥨񫵗𿞸􋻩񱷳󟝐󮃞󵹱񿢖𛒕𺬛󊹛󲝂򺗝𭙪񰕺𝧒򊕆𘝞뎛􆃂򊥍񲽤򩻛󂛕磪󡯮끝򰯬󢽈󼿶󘓥򮀓񽑖򗔝󃢖񶡁􁇘󶪼񌍌񛕄񻊺򔴩寡񿜾񿸶򌰘񡇈򦬽𥵑󧨑򕩃򳴪񾖾򌯎󿥐񱛦𱫞𵪶򁇐󑓮򄨠򾎹𛰑𗋨䨀򡒶𰌡򶟫񦲋𧮁􍰍񲍚񰃦𦅂󎓜󸾧󉦩󣲦򄉼񿒣𸖉񿡥󬯞嗟𧽘񿷦򠍍🼟򇋹񖾷𖏕񟡥󜋝􋯱񤄓򭀢򌝓𱀉𫍡󬥝򈘏򞏡񄙍𪏸࿹𺐅񢻳򘮇𐂇񘚡ந򾩴󜆵𰑕򰏷񛉿򢑬򭕴𨬎󴈂􋵔򆀍񖨸􂳚󽡂󎖪񡉽񕧣񎗎򝤉򡔙񆔈󖾩󅾜񋩟򝼤񯓦󐚉񟯶򄠔𦔏򲔐o

How do we reverse the operation? During one of the squashathons I found a ticket about a feature that I didn't know about previously. Basically, the ticket says that Rakudo is doing stuff that it shouldn't, which is of course something we will abuse next time. But for now we're within the limits of relative sanity:

say ‘1.’,:1114110[‘o򲔐𦔏򄠔񟯶󐚉񯓦򝼤񋩟󅾜󖾩񆔈򡔙򝤉񎗎񕧣񡉽󎖪󽡂􂳚񖨸򆀍􋵔󴈂𨬎򭕴򢑬񛉿򰏷𰑕󜆵򾩴ந񘚡𐂇򘮇񢻳𺐅࿹𪏸񄙍򞏡򈘏󬥝𫍡𱀉򌝓򭀢񤄓􋯱󜋝񟡥𖏕񖾷򇋹🼟򠍍񿷦𧽘嗟󬯞񿡥𸖉񿒣򄉼󣲦󉦩󸾧󎓜𦅂񰃦񲍚􍰍𧮁񦲋򶟫𰌡򡒶䨀𗋨𛰑򾎹򄨠󑓮򁇐𵪶𱫞񱛦󿥐򌯎񾖾򳴪򕩃󧨑𥵑򦬽񡇈򌰘񿸶񿜾寡򔴩񻊺񛕄񌍌󶪼􁇘񶡁󃢖򗔝񽑖򮀓󘓥󼿶󢽈򰯬끝󡯮磪󂛕򩻛񲽤򊥍􆃂뎛𘝞򊕆𝧒񰕺𭙪򺗝󲝂󊹛𺬛𛒕񿢖󵹱󮃞󟝐񱷳􋻩𿞸񫵗򣥨򚘣򶝠򯫞󌋩򑠒򅳒𔇆񘦵򌠐𢕍򡀋𪱷𢍟񗈼򙯬񨚑񙦅󘶸󜹕򷒋񤍠󻁾’.ords]

Note that the string has to be in reverse. Other than that it looks very nice: 192 characters including the decoder. This isn't a great idea for printing constants that are otherwise computable, but given the length of the decoder and the relatively dense packing rate of the data, it comes in handy in other tasks.

## All good things must come to an end; horrible things – more so

That's about it for the article. For more code golf tips I've started this repository: https://github.com/AlexDaniel/6lang-golf-cheatsheet

Hoping to see you around on https://code-golf.io/! Whether using perl 6 or not, I'd love to see all of my submissions beaten.

## Perl 6 Advent Calendar: Day 22 – Features of Perl 6.d

### Published by liztormato on 2017-12-22T00:00:49

So there we are. Two years after the first official release of Rakudo Perl 6. Or 6.c, to be more precise. Since Matt Oates already touched on the performance improvements since then, Santa thought to counterpoint this with a description of the new features of 6.d that have been implemented since then. Because there have been many, Santa had to make a selection.

## Tweaking objects at creation

Any class that you create can now have a TWEAK method.
This method will be called after all other initializations of a new instance of the class have been done, just before it is returned by .new. A simple, somewhat contrived example: a class A has one attribute with a default value of 42, but the value should change if that default is explicitly specified at object creation:

class A {
    has $.value = 42;
    method TWEAK(:$value = 0) {  # default prevents warning
        # change the attribute if the default value is specified
        $!value = 666 if $value == $!value;
    }
}
# no value specified, it gets the default attribute value
dd A.new;              # A.new(value => 42)

# value specified, but it is not the default
dd A.new(value => 77); # A.new(value => 77)

# value specified, and it is the default
dd A.new(value => 42); # A.new(value => 666)

## Concurrency Improvements

The concurrency features of Rakudo Perl 6 saw many improvements under the hood. Some of these were exposed as new features. Most prominent are Lock::Async (a non-blocking lock that returns a Promise) and atomic operators.

In most cases, you will not need to use these directly, but it is probably good that you know about atomic operators if you’re engaged in writing programs that use concurrency features. An often occurring logic error, especially if you’ve been using threads in Pumpking Perl 5, is that there is no implicit locking on shared variables in Rakudo Perl 6. For example:

my int $a;
await (^5).map: {
    start { ++$a for ^100000 }
}
say $a; # something like 419318

So why doesn’t that show 500000? The reason is that we had 5 threads incrementing the same variable at the same time. And since incrementing consists of a read step, an increment step and a write step, it became very easy for one thread to do the read step at the same time as another thread, and thus lose an increment. Before we had atomic operators, the correct way of writing the above code would be:

my int $a;
my $l = Lock.new;
await (^5).map: {
    start {
        for ^100000 {
            $l.protect( { ++$a } )
        }
    }
}
say $a; # 500000

This would give you the correct answer, but would be at least 20x as slow.

Now that we have atomic variables, the above code becomes:

my atomicint $a;
await (^5).map: {
    start { ++⚛$a for ^100000 }
}
say $a; # 500000

Which is very much like the original (incorrect) code. And this is at least 6x as fast as the correct code using Lock.protect.

## Unicode goodies

So many, so many. For instance, you can now use ≤, ≥, ≠ as Unicode versions of <=, >= and != (complete list). You can now also create a grapheme by specifying the Unicode name of the grapheme, e.g.:

say "BUTTERFLY".parse-names; # 🦋

or create the Unicode name string at runtime:

my $t = "THUMBS UP SIGN, EMOJI MODIFIER FITZPATRICK TYPE";
print "$t-$_".parse-names for 3..6; # 👍🏼👍🏽👍🏾👍🏿

Or collate instead of just sort:

# sort by codepoint value
say <ä a o ö>.sort; # (a o ä ö)
# sort using Unicode Collation Algorithm
say <ä a o ö>.collate; # (a ä o ö)

Or use unicmp instead of cmp:

say "a" cmp "Z"; # More
say "a" unicmp "Z"; # Less

Or that you can now use any Unicode digits in Match variables ($١ for $1), in negative numbers (-١ for -1), and in radix bases (:۳("22") for :3("22")).

It’s not for nothing that Santa considers Rakudo Perl 6 to have the best Unicode support of any programming language in the world!

## Skipping values

You can now call .skip on Seq and Supply to skip a number of values that were being produced. Together with .head and .tail this gives you ample manipulexity with Iterables and Supplies.

By the way, .head now also takes a WhateverCode so you can indicate you want all values except the last N (e.g. .head(*-3) would give you all values except the last three). The same goes for .tail (e.g. .tail(*-3) would give you all values except the first three).
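As a quick illustration of those semantics (a Python sketch of the idea only, not Rakudo’s implementation; the helper names are made up), “all but the last N” never needs to buffer more than N values, and “all but the first N” is just an offset:

```python
from collections import deque
from itertools import islice

def head_but_last(iterable, n):
    """Yield every value except the final n, buffering at most n items.

    Illustrative analogue of Perl 6's .head(*-n); not Rakudo's code.
    """
    if n <= 0:
        yield from iterable
        return
    buf = deque(maxlen=n)
    for value in iterable:
        if len(buf) == n:
            # The oldest buffered value is now known not to be in the last n.
            yield buf.popleft()
        buf.append(value)

def tail_but_first(iterable, n):
    """Yield every value except the first n (compare .tail(*-n))."""
    return islice(iterable, n, None)

print(list(head_but_last(range(10), 3)))   # [0, 1, 2, 3, 4, 5, 6]
print(list(tail_but_first(range(10), 3)))  # [3, 4, 5, 6, 7, 8, 9]
```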

Some additions to the Iterator role make it possible for iterators to support the .skip functionality even better. If an iterator can skip a value more cheaply than it can produce it, it should implement the skip-one method. Derived from this are the skip-at-least and skip-at-least-pull-one methods that can also be provided by an iterator.
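The derivation can be sketched abstractly. In this Python stand-in for the Iterator role (illustrative only; the method names mirror the Perl 6 ones, but this is not Rakudo code), skip_at_least falls out of repeated skip_one, and an iterator that can advance cheaply overrides skip_one without producing the value:

```python
class Iterator:
    """Minimal stand-in for Perl 6's Iterator role (illustrative only)."""
    def pull_one(self):
        raise NotImplementedError

    def skip_one(self):
        # Default: produce and discard one value; False means exhausted.
        return self.pull_one() is not None  # None plays the role of IterationEnd

    def skip_at_least(self, n):
        # Derived from skip-one, as described above.
        for _ in range(n):
            if not self.skip_one():
                return False
        return True

class RangeIterator(Iterator):
    def __init__(self, n):
        self.i, self.n = 0, n

    def pull_one(self):
        if self.i >= self.n:
            return None
        self.i += 1
        return self.i - 1

    def skip_one(self):
        # Cheaper than producing the value: just advance the index.
        if self.i >= self.n:
            return False
        self.i += 1
        return True

it = RangeIterator(10)
it.skip_at_least(7)
print(it.pull_one())  # 7
```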

An example of the usage of .skip to find out the 1000th prime number:

say (^Inf).grep(*.is-prime)[999]; # 7919

Versus:

say (^Inf).grep(*.is-prime).skip(999).head; # 7919

The latter is slightly more CPU efficient, but more importantly much more memory efficient, as it doesn’t need to keep the first 999 prime numbers in memory.

## Of Bufs and Blobs

Buf has become much more like an Array, as it now supports .push, .append, .pop, .unshift, .prepend, .shift and .splice. It also has become more like Str with the addition of a subbuf-rw (analogous with .substr-rw), e.g.:

my $b = Buf.new(100..105);
$b.subbuf-rw(2,3) = Blob.new(^5);
say $b.perl; # Buf.new(100,101,0,1,2,3,4,105)

You can now also .allocate a Buf or Blob with a given number of elements and a pattern. Or change the size of a Buf with .reallocate:

my $b = Buf.allocate(10,(1,2,3));
say $b.perl; # Buf.new(1,2,3,1,2,3,1,2,3,1)
$b.reallocate(5);

## Embedding Perl 6

Two new features make embedding Rakudo Perl 6 easier to handle:
the &*EXIT dynamic variable now can be set to specify the action to be taken when exit() is called.

Setting the environment variable RAKUDO_EXCEPTIONS_HANDLER to "JSON" will throw Exceptions in JSON, rather than text, e.g.:

$ RAKUDO_EXCEPTIONS_HANDLER=JSON perl6 -e '42 = 666'
{
  "X::Assignment::RO" : {
    "value" : 42,
    "message" : "Cannot modify an immutable Int (42)"
  }
}

## Bottom of the Gift Bag

While rummaging through the still quite full gift bag, Santa found the following smaller prezzies:

## Time to catch a Sleigh

Santa would like to stay around to tell you more about what’s been added, but there simply is not enough time to do that. If you really want to keep up-to-date on new features, you should check out the Additions sections in the ChangeLog that is updated with each Rakudo compiler release.

So, catch you again next year!

Best wishes from

## Perl 6 Advent Calendar: Day 21 – Sudoku with Junctions and Sets

### Published by scimon on 2017-12-21T00:00:31

There are a number of core elements in Perl6 that give you powerful tools to do things in a concise and powerful way. Two of these are Junctions and Sets, which share a number of characteristics but are also wildly different. In order to demonstrate the power of these I’m going to look at how they can be used with a simple problem: Sudoku puzzles.

## Sudoku : A refresher

So, for those of you who don’t know, a Sudoku puzzle is a 9 by 9 grid that comes supplied with some cells filled in with numbers between 1 and 9. The goal is to fill in all the cells with numbers between 1 and 9 so that no row, column or sub-square has more than one of any of the numbers in it.

There are a few ways to represent a Sudoku puzzle, my personal favourite being a 9 by 9 nested array, for example:

my @game = [
    [4,0,0,0,0,0,0,0,0],
    [0,9,0,3,4,6,0,5,0],
    [5,8,0,0,9,0,0,0,6],
    [0,4,0,8,1,3,0,0,9],
    [0,0,0,5,0,4,0,0,0],
    [8,0,0,6,2,9,0,4,0],
    [3,0,0,0,5,0,0,6,2],
    [0,5,0,9,3,2,0,8,0],
    [0,0,0,0,0,0,0,0,1]
];

In this situation the cells with no value assigned are given a 0; this way all the cells have an Integer value assigned to them. The main thing to bear in mind with this format is that you need to reference cells using @game[$y][$x] rather than @game[$x][$y].
## Junctions : Quantum Logic Testing

One of the simplest ways to use Junctions in Perl6 is in a logical test. The Junction can represent a selection of values you want to test against. For example:

if ( 5 < 1|10 < 2 ) { say "Spooky" } else { say "Boo" }
Spooky

So, not only does this demonstrate operator chaining (something that experienced programmers may already be looking confused about), but the any Junction ( 1|10 ) evaluates to True for both 5 < 10 and 1 < 2. In this way Junctions are extremely powerful already; it’s when you assign them to a variable container that it gets really interesting.

One of the tests we’d like to be able to make on our Sudoku puzzle is to see if it’s full. By which I mean every cell has been assigned a value greater than 0. A full puzzle may not be completed correctly, but there’s a guess in each cell. Another way of putting that would be that none of the cells has a value of 0. Thus we can define a Junction, store it in a scalar variable, and test it at any point to see if the puzzle is full.

my $full-test = none( (^9 X ^9).map(-> ($x,$y) {
    @game[$y][$x];
} ) );
say so $full-test == 0;
False

In this case the game still has a number of 0’s in it, so seeing if $full-test equals 0 evaluates to False. Note that without the so to cast the result to a Boolean you’ll get a breakdown of the test for each cell; only if all of these are False will the Junction evaluate to True.

Note also the use of the ^9 and X operators to generate two Ranges from 0 to 8 and then the cross product of these two lists of 9 elements to make a list of all the possible X,Y co-ordinates of the puzzle. It’s this kind of powerful simplicity that is one of the reasons I love Perl6. But I digress.

The strength of this method is that once you’ve defined the Junction you don’t need to modify it. If you change the values stored in the Array the Junction will look at the new values instead (note this only holds true for updating individual cells, if you swap out a whole sub array with a new one you’ll break the Junction).

So that’s a simple use of a Junction to store a multi-variable test you can reuse. But it gets more interesting when you realise that the values in a Junction can themselves be Junctions.

Let’s look at a more complex test: a puzzle is complete if, for every row, column and square in the puzzle, there is only one of each number. In order to make this test we’re going to need three helper functions.

subset Index of Int where 0 <= * <= 8;
sub row( Index $y ) {
    return (^9).map( { ( $_, $y ) } );
}
sub col( Index $x ) {
    return (^9).map( { ( $x, $_ ) } );
}
multi sub square( Index $sq ) {
    my $x = $sq % 3 * 3;
    my $y = $sq div 3 * 3;
    return square( $x, $y );
}
multi sub square( Index $x, Index $y ) {
    my $tx = $x div 3 * 3;
    my $ty = $y div 3 * 3;
    return ( (0,1,2) X (0,1,2) ).map( -> ( $dx, $dy ) {
        ( $tx + $dx, $ty + $dy )
    } );
}

So here we define an Index as a value between 0 and 8, and then define our subs to return a List of Lists, with the sub-lists each being a pair of X and Y indices. Note that our square function can accept one or two positional arguments. In the single-argument version we define the sub-squares with 0 being in the top left, going left to right with 8 being the bottom right. The two-argument version gives us the list of cells in the square containing a given cell (including itself). So with these in place we can define our one() lists for each row, column and square. Once we have them we can then put them into an all() Junction.

my $complete-all = all(
    (^9).map(
        {
            |(
                one( row( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) ),
                one( col( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) ),
                one( square( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) )
            )
        }
    )
);

Once we have that, testing to see if the puzzle is complete is quite simple.

say [&&] (1..9).map( { so $complete-all == $_ } );
False

Here we test each possible cell value of 1 through 9 against the Junction; in each case this will be True if all the one() Junctions contain exactly one of the value. Then we use the [] reduction meta-operator to chain these results into a final True / False value (True if all the results are True, and False otherwise). Again, this test can be reused as you add values to the cells, and will only return True when the puzzle has been completed and is correct.

Once again we’ve got a complex test boiled down to a single line of code. Our $complete-all variable needs to be defined once and is then valid for the rest of the session. This sort of nested Junction test can reach many levels. As a final example, what if we want to test whether a current puzzle is valid? By which I mean it’s not complete, but it doesn’t have any duplicate numbers in any row, column or square. Once again we can make a Junction for this: each row, column or square is valid if one or none of its cells is set to each of the possible values. Thus our creation of the Junction is similar to the $complete-all one.

my $valid-all = all(
    (^9).map(
        {
            |(
                one(
                    none( row( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) ),
                    one( row( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) )
                ),
                one(
                    none( col( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) ),
                    one( col( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) )
                ),
                one(
                    none( square( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) ),
                    one( square( $_ ).map( -> ( $x, $y ) { @game[$y][$x] } ) )
                )
            )
        }
    )
);

The test for validity is basically the same as the test for completeness.

say [&&] (1..9).map( { so $valid-all == $_ } );
True

Except in this case our puzzle is valid and so we get a True result.

## Sets : Collections of Objects

Whilst Junctions are useful to test values, they aren’t as useful if we want to try solving the puzzle. But Perl6 has another type of collection that can come in very handy. Sets (and their related types, Bags and Mixes) let you collect items and then apply mathematical set operations to them to find how different Sets interact with each other.

As an example we’ll define a possible function that returns the values that are possible for a given cell. If the cell already has a value set, we will return the empty list.

sub possible( Index $x, Index $y, @game ) {
    return () if @game[$y][$x] > 0;

    (
        (1..9)
        (-)
        set(
            ( row($y).map( -> ( $x, $y ) { @game[$y][$x] } ).grep( * > 0 ) ),
            ( col($x).map( -> ( $x, $y ) { @game[$y][$x] } ).grep( * > 0 ) ),
            ( square($x,$y).map( -> ( $x, $y ) { @game[$y][$x] } ).grep( * > 0 ) )
        )
    ).keys.sort;
}

Here we find the difference between the numbers 1 through 9 and the Set made up of the values of the row, column and square the given cell is in. We ignore cells with a 0 value using grep. As Sets store their contents as unordered key / value pairs, we get the keys and then sort them for consistency. Note that here we’re using the ASCII (-) version of the operator; we could also use the Unicode version, ∖, instead.

We could define the set as the union of each of the results from row, col and square and the result would be the same. Also we’re using the two argument version of square in this case.

It should be noted that this is the simplest definition of possible values: there’s no additional logic going on, but even this simple result lets us apply the simplest of solving algorithms. In this case we loop over every cell in the grid and, if it has exactly one possible value, we set the value to that. We loop round, get a list of cells to set, then loop through the list and set the values. If the list of cells to set is empty, or the puzzle is complete, then we stop.

my @updates;
repeat {
@updates = (^9 X ^9).map( -> ($x,$y) {
($x,$y) => possible($x,$y,@game)
} ).grep( *.value.elems == 1 );
    for @updates -> $pair {
        my ( $x, $y ) = $pair.key;
        @game[$y][$x] = $pair.value[0];
    }
} while ( @updates.elems > 0 && ! [&&] (1..9).map( { so $complete-all == $_ } ) );

So we make a list of Pairs where the key is the x,y coordinates and the value is the list of possible values. Then we remove all those that don’t have exactly one possible value. This is continued until there are no cells found with a single possible value, or the puzzle is complete.

Another way of finding solutions is to look for values that only appear in one cell’s possibilities within a given row, column or square. For example, if we have the following possibilities:

(1,2,3),(2,3,4),(),(),(4,5),(),(),(2,3,4),()

1 and 5 only appear in the row once each. We can make use of the symmetric set difference operator and operator chaining to get this.

say (1,2,3) (^) (2,3,4) (^) () (^) () (^) (4,5) (^) () (^) () (^) (2,3,4) (^) ()
set(1 5)

Of course in that case we can use the reduction meta-operator on the list instead.

say [(^)] (1,2,3),(2,3,4),(),(),(4,5),(),(),(2,3,4),()
set(1 5)
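The “appears exactly once” result can also be cross-checked by simply counting occurrences across the possibility lists; a small Python sanity check of the example data (illustrative only, not the Perl 6 code):

```python
from collections import Counter

# The row's possibility lists from the example above.
possibilities = [(1, 2, 3), (2, 3, 4), (), (), (4, 5), (), (), (2, 3, 4), ()]

# Count how often each candidate value appears, keep those seen exactly once.
counts = Counter(v for cell in possibilities for v in cell)
only_once = {v for v, c in counts.items() if c == 1}
print(sorted(only_once))  # [1, 5]
```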

So in that case the algorithm is simple (here I’ll just cover rows; the column and square code is basically the same).

my @updates;
for ^9 -> $idx {
    my $only = [(^)] row($idx).map( -> ( $x, $y ) { possible($x,$y,@game) } );
    for $only.keys -> $val {
        for row($idx) -> ( $x, $y ) {
            if $val (elem) possible($x,$y,@game) {
                @updates.push( ($x,$y) => $val );
            }
        }
    }
}

We can then loop through the updates array as above. Combined, these two algorithms can solve a large number of Sudoku puzzles by themselves and simplify others.

Note we have to make two passes: first we get the numbers we’re looking for, and then we look through each row to find where each number appears. For this we use the (elem) operator. Sets can also be referenced using Associative subscripts, for example:

say set(1,5){1}
True

## A note on Objects

So far all the examples have used basic integers. But there’s nothing stopping you using Objects in your Junctions and Sets. There are a few things to bear in mind, though: Sets use the === identity operator for their tests. Most objects will fail an identity check unless you have cloned them or have defined the WHICH method in a way that allows them to be compared.

For the Sudoku puzzle you may want to create a CellValue class that stores whether the number was one of the initial values in the puzzle. If you do this, though, you’ll need to override WHICH and make it return the Integer value of the Cell. As long as you are fine with an identity check being technically invalid in this case (two different CellValues may have the same value, but they won’t be the same object), then you can put them in Sets.
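The same concern exists in other languages: Python sets, for instance, hash and compare members via __hash__ and __eq__, much as Perl 6 Sets use === and therefore WHICH. A sketch of a CellValue-like class (hypothetical names, Python analogy only) that opts into value identity:

```python
class CellValue:
    """Hypothetical cell wrapper; compares by value, like overriding WHICH."""
    def __init__(self, value, initial=False):
        self.value = value
        self.initial = initial  # was this one of the puzzle's starting clues?

    def __eq__(self, other):
        return isinstance(other, CellValue) and self.value == other.value

    def __hash__(self):
        # Hash by the wrapped value so set membership is value-based.
        return hash(self.value)

# Two distinct objects with the same value count as one set member:
s = {CellValue(5, initial=True), CellValue(5)}
print(len(s))  # 1
```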

I hope you’ve found this interesting. Junctions and Sets are two of the many parts of Perl6 that give you the power to do complex tasks simply. If you’re interested in the code here, there’s an Object-based version available that you can install with:

zef install Game::Sudoku

## Strangely Consistent: Has it been three years?

007, the toy language, is turning three today. Whoa.

On its one-year anniversary, I wrote a blog post to chronicle it. It seriously doesn't feel like two years since I wrote that post.

On and off, in between long stretches of just being a parent, I come back to 007 and work intensely on it. I can't remember ever keeping a side project alive for three years before. (Later note: Referring to the language here, not my son.) So there is that.

So in a weird way, even though the language is not as far along as I would expect it to be after three years, I'm also positively surprised that it still exists and is active after three years!

In the previous blog post, I proudly announce that "We're gearing up to an (internal) v1.0.0 release". Well, we're still gearing up for v1.0.0, and we are closer to it. The details are in the roadmap, which has become much more detailed since then.

Noteworthy things that happened in these past two years:

Things that I'm looking forward to right now:

• Landing macro infix:<ff> in master, which is quite literally one small fix away at this point.
• Landing a huge object system refactor that really cleans up the language.
• is parsed, also only a few steps away.
• Making a better web site, focused around language tutorials and API documentation.
• Writing a 007 parser in pure 007.

I tried to write those in increasing order of difficulty.

All in all, I'm quite eager to one day burst into #perl6 or #perl6-dev and actually showcase examples where macros quite clearly do useful, non-trivial things. 007 is and has always been about producing such examples, and making them run in a real (if toy) environment.

And while we're not quite over that hump yet, we're perceptibly closer than we were two years ago.

Belated addendum: Thanks and hugs to sergot++, to vendethiel++, to raiph++ and eritain++, for sharing the journey with me so far.

## 6guts: A unified and improved Supply concurrency model

Perl 6 encourages the use of high-level constructs when writing concurrent programs, rather than dealing with threads and locks directly. These not only aid the programmer in producing more correct and understandable programs, but they also afford those of us working on Perl 6 implementation the opportunity to improve the mechanisms beneath the constructs over time. Recently, I wrote about the new thread pool scheduler, and how improving it could bring existing programs lower memory usage, better performance, and the chance to scale better on machines with higher core counts.

In this post, I’ll be discussing the concurrency model beneath Supply, which is the Perl 6 mechanism for representing an asynchronous stream of values. The values may be packets arriving over a socket, output from a spawned process or SSH command, GUI events, file change notifications, ticks of a timer, signals, and plenty more besides. Giving all of these a common interface makes it easier to write programs that compose multiple asynchronous data sources, for example, a GUI application that needs to update itself when files change on disk, or a web server that should shut down cleanly upon a signal.

Until recently, there were actually two different concurrency models, one for supply blocks and one for all of the methods available on a Supply. Few people knew that, and fewer still had a grasp of what that meant. Unfortunately, neither model worked well with the Perl 6.d non-blocking await. Additionally, some developers using supply/react/whenever in their programs ran into a few things that they had expected would Just Work, which in reality did not.

Before digging in to the details, I’d like to take a moment to thank Vienna.pm for providing the funding that allowed me to dedicate a good bit of time to this task. It’s one of the trickiest things I’ve worked on in a while, and having a good chunk of uninterrupted time to focus on it was really helpful. So, thanks!

### Supplies and concurrency

The first thing to understand about Supply, and supply blocks, is that they are a tool for concurrency control. The power of supply blocks (react also) is that, no matter how many sources of asynchronous data you tap using whenever blocks, you can be sure that only one incoming message will be processed at a time. The same principle operates with the various methods: if I Supply.merge($s1,$s2).tap(&some-code), then I know that even if $s1 or $s2 were to emit values concurrently, they will be pushed onward one at a time, and thus I can be confident that &some-code will be called with a value at a time.

These one-message-at-a-time semantics exist to enable safe manipulation of state. Any lexical variables declared within a supply block will exist once per tap of the supply block, and can be used safely inside of it. Furthermore, it’s far easier to write code that processes asynchronous messages when one can be sure the processing code for a given message will run to completion before the next message is processed.

Another interesting problem for any system processing asynchronous messages is that of backpressure. In short, how do we make a source of messages emit them at a rate no greater than that of the processing logic? The general principle with Supply is that the sender of a message pays the cost of its processing. So, if I have $source.map(&foo).tap(&bar), then whatever emits at the source pays the cost of the map of that message along with the processing done by the tap callback.

The principle is easy to state, but harder to deliver on. One of the trickiest questions revolves around recursion: what happens when a Supply ends up taking an action that results in it sending a message to itself? That may sound contrived, but it can happen very easily. When the body of a supply block runs, the whenever blocks trigger tapping of a supply. If the tapped Supply was to synchronously emit a message, we immediately have a problem: we can’t process it now, because that would violate the one-at-a-time rule. A real world example where this happens? A HTTP/2 client, where the frame serializer immediately emits the connection preface when tapped, to make sure it gets sent before anything else can be sent. (Notice how this in itself also relies on the non-interruption principle.) This example comes straight out of Cro’s HTTP/2 implementation.

A further issue is how we apply the backpressure. Do we block a real thread? Or can we do better? If we go doing real blocking of thread pool threads, we’ll risk exhausting the pool at worst, or in the better case force the program to create more threads (and so use more memory) than it really needs.

### Where things stood

So, how did we do on these areas before my recent work? Not especially great, it turned out. First, let’s consider the mechanism that was used for everything except the case of supply blocks.
Supply processing methods generally check that their input Supply is serial – that is, delivered one message at a time – by calling its serial method. If not, they serialize it. (In fact, they all call the serialize method, which just returns identity if serial is True, thus factoring out the check.) The upshot of this is that we only have to enforce the concurrency control once in a chain of operations that can’t introduce concurrency themselves. That’s good, and has been retained during my changes.

So, the interesting part is how serialize enforces one-at-a-time semantics. Prior to my recent work, it did this using a Lock. This has a few decent properties: locks are pretty well optimized, and they block a thread in an OS-aware way, meaning that the OS scheduler knows not to bother scheduling the waiting thread until the lock is released. They also have some less good properties. One is that using Lock blocks the use of Perl 6.d non-blocking await in any downstream code (a held Lock can’t migrate between threads), which was a major motivator to look for an alternative solution. Even if that were not an issue, the use of Lock really blocks up a thread, meaning that it will not be available for the thread pool to use for anything else. Last, but certainly not least, Lock is a reentrant mutex – meaning that we could end up violating the principle that a message is completely processed before the next message is considered in some cases!

For supply blocks, a different approach had been taken. The supply block (or react block) instance had a queue of messages to process. Adding to, or taking from, this queue was protected by a Lock, but that was only held in order to update the queue. Messages were not removed from the queue until they had been processed, which in turn provided a way of knowing if the block instance is “busy”. If a message was pushed to the block instance when it was busy, then it was put onto the queue…and that is all.
So who paid for the message processing? It turns out, it was handled by the thread that was busy in the supply block at the time the message arrived! This is pretty sensible if the message was a result of recursion. However, it could lead to violation of the principle that the sender of a message should pay for its processing costs. An unlucky sender could end up paying the cost of an unbounded number of messages from other senders!

Interestingly enough, there haven’t been any bug reports about this, perhaps because most workloads simply don’t hit this unfairness, and those that do aren’t impacted by it anyway. Many asynchronous messages arrive on the thread pool, and it’s probably not going to cause much trouble if one thread ends up working away at a particular supply block instance that is being very actively used. It’s a thread pool, and some thread there will have to do the work anyway. The unfairness could even be argued to be good for memory caches!

Those arguments don’t justify the problems of the previous design, however. Queues are pretty OK at smoothing out peaks in workloads, but the stable states of a queue are being full and being empty, and this was an unbounded queue, so “full” would mean “out of memory”. Furthermore, there was no way to signal back towards a producer that it was producing too fast.

### Towards a unified model: Lock::Async

So, how to do better? I knew I wanted a unified concurrency control mechanism to use for both serialize and supply/react blocks. It had to work well with non-blocking await in Perl 6.d. In fact, it needed to – in the case that a message could not be processed now, and when the sender was on the thread pool – do exactly what non-blocking await does: suspend the sender by taking a continuation, and schedule that to be run when the message it was trying to send could be sent. Only in the case that the sender is not a pool thread should it really block.
Furthermore, it should be fair: message senders should queue up in order to have their message processed. On top of that, it needed to be efficient in the common case, which is when there is no contention.

In response to these needs, I built Lock::Async: an asynchronous locking mechanism. But what is an asynchronous lock? It has a lock method which returns a Promise. If nothing else is holding the lock, then the lock is marked as taken (this check-and-acquire operation is implemented efficiently using an atomic operation) and the Promise that is returned will already be Kept. Otherwise, the Promise that is returned will be Kept when the lock becomes available. This means it can be used in conjunction with await. And – here’s the bit that makes this particularly useful – it means that it can use the same infrastructure built for non-blocking await in Perl 6.d. Thus, an attempt to acquire an asynchronous lock that is busy on the thread pool will result in that piece of work being suspended, and the thread can be used for something else.

As usual, in a non-pool thread, real blocking (involving a condition variable) will take place, meaning those who need to be totally in control of what thread they’re running on, and so use Thread directly, will maintain that ability. (Examples of when that might be needed include writing bindings using NativeCall.)

When the unlock method is called, then there are two cases. The first is if nobody contended for the lock in the meantime: in this case, another atomic operation can be used to mark it as free again. Otherwise, the Promise of the first waiter in line is kept. This mechanism provides fairness: the lock is handed out to senders in the order that they requested it.

Thus, using Lock::Async for concurrency control of supplies gives us:

• A mechanism that, under no contention, is relatively cheap: a single atomic operation to acquire and another one to release.
• Fairness: senders get the lock in the order they asked for it.
• Non-blocking suspension of a sender that can not currently obtain the lock, enabling better utilization of the thread pool.
• Working non-blocking await in supply/react blocks, or even Supply methods like do or map.
• No accidental reentrancy (but we’ll need to care for recursion – more on that next).

As an aside: as part of this I spent some time thinking about the semantics of await inside of a supply or react block. Should it block processing of other messages delivered to the block? I concluded that yes, it should: it provides a way to apply backpressure (for example, await of a write to a socket), and also means that await isn’t an exception to the “one message processed at a time, and processed fully” design principle. It’s not like getting the other behavior is hard: just use a nested whenever.

### Taps that send messages

So, I put use of Lock::Async in place and all was…sort of well, but only sort of. Something like this:

my $s = supply {
    for ^10 -> $i {
        emit $i
    }
}
react {
    whenever $s { .say }
}

Would hang. Why? Because the lock protecting the react block was obtained to run its mainline, setting up the subscription to $s. The setup is treated just like processing a message: it should run to completion before any other message is processed. Being able to rely on that is important for both correctness and fairness. The supply $s, however, wants to synchronously emit values as soon as it is tapped, so it tries to acquire the async lock. But the lock is held, so it waits on the Promise, but in doing so blocks progress of the calling react block, so the lock is never released. It’s a deadlock.

An example like this did work under the previous model, though for not entirely great reasons. The 10 messages would be queued up, along with the done message of $s. Then, its work complete, the calling react block would get back control, and then the messages would be processed. This was OK if there was just a handful of messages. But something like this:

my $s = supply {
    loop {
        emit ++$;
    }
}
react {
    whenever $s { .say; }
    whenever Promise.in(0.1) { done; }
}

Would hang, eating memory rapidly, until it ran out of memory, since it would just queue messages forever and never give back control.

The new semantics are as follows: if the tap method call resulting from a whenever block being encountered causes an await that can not immediately be satisfied, then a continuation is taken, rooted at the whenever. It is put into a queue. Once the message (or initial setup) that triggered the whenever completes, and the lock is released, then those continuations are run. This process repeats until the queue is empty.

What does this mean for the last two examples? The first one suspends at the first emit in the for ^10 { ... } loop, and is resumed once the setup work of the react block is completed. The loop then delivers the messages into the react block, producing them and having them handled one at a time, rather than queuing them all up in memory. The second example, which just hung and ate memory previously, now works as one would hope: it displays values for a tenth of a second, and then tears down the supply when the react block exits due to the done.

This opens up supply blocks to some interesting new use cases. For example, this works now:

my $s = supply {
    loop {
        await Promise.in(1);
        emit ++$;
    }
}
react {
    whenever $s {
        .say
    }
}


Which isn’t itself useful (just use Supply.interval), but the general pattern here – of doing an asynchronous operation in a loop and emitting the result it gives each time – is. A supply emitting the results of periodic polling of a service, for example, is pretty handy, and now there’s a nice way to write it using the supply block syntax.

### Other recursion

Not all recursive message delivery results from synchronous emits from a Supply tapped by a whenever. While the solution above gives nice semantics for those cases – carefully not introducing extra concurrency – it’s possible to get into situations where processing a message results in another message that loops back to the very same supply block. This typically involves a Supplier being emitted into. This isn’t common, but it happens.

Recursive mutexes – which are used to implement Lock – keep track of which thread is currently holding the lock, using thread ID. This is the reason one cannot migrate code that is holding such a lock between threads, and thus why one being held prevents non-blocking await from being, well, non-blocking. Thus, a recursion detection mechanism based around thread IDs was not likely to end well.

Instead, Lock::Async uses dynamic variables to keep track of which async locks are currently held. These are part of an invocation record, and so can safely be transported across thread pool threads, meaning they interact just fine with the Perl 6.d non-blocking await, and the new model of non-blocking handling of supply contention.

But what does it do when it detects recursion? Clearly, it can’t just decide to forge ahead and process the message anyway, since that violates the “messages are processed one at a time and completely” principle.

I mentioned earlier that Lock::Async has lock and unlock methods, but those are not particularly ideal for direct use: one must be terribly careful to make sure the unlock is never missed. Therefore, it has a protect method taking a closure. This is then run under the lock, thus factoring out the lock and unlock, meaning it only has to be got right in one place.

There is also a protect-or-queue-on-recursion method. This is where the recursion detection comes in. If recursion is detected, then instead of the code being run now, a then is chained on to the Promise returned by lock, and the passed closure is run in the then. Effectively, messages that can’t be delivered right now because of recursion are queued for later, and will be sent from the thread pool.

This mechanism’s drawback is that it becomes a place where concurrency is introduced. On the other hand, given we know that we’re introducing it for something that’s going to run under a lock anyway, that’s a pretty safe thing to be doing. A good property of the design is that recursive messages queue up fairly with external messages.
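The recursion-handling scheme above can be sketched in a few lines. This is a minimal, synchronous Python model of the idea, not the Rakudo/MoarVM implementation: the names `AsyncLock` and `protect_or_queue_on_recursion` mirror the prose, the held-locks set stands in for the dynamic variable, and a plain FIFO queue stands in for chaining a `then` on the lock's Promise.

```python
# A minimal sketch of Lock::Async-style recursion handling: messages that
# loop back to a lock already held by the current logical thread are queued
# and delivered after the current message completes. Illustrative only.
from collections import deque

class AsyncLock:
    def __init__(self):
        self.queue = deque()   # closures queued due to detected recursion
        self.holders = set()   # stand-in for the dynamic variable tracking
                               # which async locks are currently held

    def protect(self, code):
        # Run `code` with the lock held, always releasing afterwards.
        self.holders.add(self)
        try:
            return code()
        finally:
            self.holders.discard(self)
            # Drain closures queued due to recursion, one at a time, each
            # under the lock again, fairly in FIFO order.
            while self.queue:
                self.protect(self.queue.popleft())

    def protect_or_queue_on_recursion(self, code):
        if self in self.holders:
            # Recursion detected: deliver later rather than violating
            # "messages are processed one at a time and completely".
            self.queue.append(code)
        else:
            self.protect(code)

processed = []
lock = AsyncLock()

def handle(msg):
    def run():
        processed.append(msg)
        if msg == "a":
            # Processing "a" loops back into the same lock: queued, not run now.
            lock.protect_or_queue_on_recursion(lambda: processed.append("b"))
    return run

lock.protect_or_queue_on_recursion(handle("a"))
print(processed)   # ['a', 'b']: the recursive message ran after the first completed
```

The key property shown here is the one from the prose: the recursive message is not processed mid-message, but queued and handled once the lock is free again.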

### Future work

The current state is much better than what came before it. However, as usual, there’s more that can be done.

One thing that bothers me slightly is that there are now two different mechanisms both dealing with different cases of message recursion: one for the case where it happens during the tapping of a supply caused by a whenever block, and one for other (and decidedly less common) cases. Could these somehow be unified? It’s not immediately clear to me either way. My gut feeling is that they probably can be, but doing so will involve some different trade-offs.

This work has also improved the backpressure situation in various ways, but we’ve still some more to do in this area. One nice property of async locks is that you can check if you were able to acquire the lock or not before actually awaiting it. That can be used as feedback about how busy the processing path ahead is, and thus it can be used to detect and make decisions about overload. We also need to do some work to communicate down to the I/O library (libuv on MoarVM) when we need it to stop reading from things like sockets or process handles, because the data is arriving faster than the application is able to process it. Again, it’s nice that we’ll be able to do this improvement and improve the memory behavior of existing programs without those programs having to change.

### In summary…

This work has replaced the two concurrency models that previously backed Supply with a single unified model. The new model makes better use of the thread pool, deals with back-pressure shortcomings with supply blocks, and enables some new use cases of supply and react. Furthermore, the new approach interacts well with Perl 6.d non-blocking await, removing a blocker for that.

These are welcome improvements, although further unification may be possible, and further work on back-pressure is certainly needed. Thanks once again to Vienna.pm for helping me dedicate the time to think about and implement the changes. If your organization would like to help me continue the journey, and/or my Perl 6 work in general, I’m still looking for funding.

## 6guts: MoarVM Specializer Improvements Part 4: Argument Guards

So far in this series, I have discussed how the MoarVM dynamic optimizer gathers statistics, uses them to plan what to optimize, and then produces specialized versions of hot parts of a program to speed up execution. In this final part, I’ll look at how we switch from unoptimized code into optimized code, which centers around argument guards.

### But wait, what about code-gen?

Ah, yes, I knew somebody would ask that. At the end of part 3, we had a data structure representing optimized code, perhaps for a routine, method, or block. While going from bytecode to a CFG in SSA form was a fairly involved process, going back to bytecode is far simpler: we iterate the basic blocks in order, iterate each of the instructions within a basic block, and write out the bytecode for each instruction. Done!

There are, of course, a few complications to take care of. When we have a forward branch, we don’t yet know the offset within the bytecode of the destination, so a table is needed to fix those up later. Furthermore, a new table of exception handler offsets will be needed, since the locations of the covered instructions and handlers will have moved. Beyond those bits of bookkeeping, however, there’s really not much more to it than a loop spitting out bytecode from instruction nodes.
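The forward-branch bookkeeping can be sketched as a classic two-pass emitter. This is a hypothetical, heavily simplified model (block names, instruction tuples, and the `emit` function are invented for illustration), not MoarVM's actual code generator:

```python
# Sketch of forward-branch fixups: emit instructions in order, recording the
# positions of branches whose target block has not been emitted yet, then
# patch the placeholders once every block's offset is known.
def emit(blocks):
    # blocks: list of (name, [instruction, ...]); an instruction is either
    # ("op", payload) or ("goto", target_block_name).
    bytecode = []   # one entry per emitted instruction (simplified)
    offsets = {}    # block name -> offset where it starts
    fixups = []     # (position in bytecode, target block name)
    for name, instrs in blocks:
        offsets[name] = len(bytecode)
        for ins in instrs:
            if ins[0] == "goto":
                bytecode.append(("goto", None))        # placeholder offset
                fixups.append((len(bytecode) - 1, ins[1]))
            else:
                bytecode.append(ins)
    # Second pass: all offsets are now known, so patch the placeholders.
    for pos, target in fixups:
        bytecode[pos] = ("goto", offsets[target])
    return bytecode

code = emit([
    ("entry",  [("op", "a"), ("goto", "exit")]),   # forward branch
    ("middle", [("op", "b")]),
    ("exit",   [("op", "c")]),
])
print(code)  # [('op', 'a'), ('goto', 3), ('op', 'b'), ('op', 'c')]
```

A real emitter works in bytes and also rebuilds the exception handler table the same way, but the fixup-table idea is the same.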

Unlike bytecode that is fed into the VM from the outside, we don’t spend time doing validation of the specialized bytecode, since we can trust that it is valid – we’re generating it in-process! Additionally, the specialized bytecode may make use of “spesh ops” – a set of extra opcodes that exist purely for spesh to generate. Some of them are non-logging forms of ops that would normally log statistics (no point logging after we’ve done the optimizations), but most are for doing operations that – without the proofs and guards done by spesh – would be at risk of violating memory safety. For example, there’s an op that simply takes an object offset and reads a pointer or integer from a certain number of bytes into it, which spesh can prove is safe to do, but in general would not be.

What I’ve described so far is the portable behavior that we can do on any platform. So it doesn’t matter whether you’re running MoarVM on x86, x64, ARM, or something else, you can take advantage of all the optimizations that spesh can do. On x64, however, we can go a step further, and compile the spesh graph not back into specialized MoarVM bytecode, but instead into machine code. This eliminates the interpreter overhead. In MoarVM, we tend to refer to this stage as “the JIT compiler”, because most people understand JIT compilation as resulting in machine code. In reality, what most other VMs call their JIT compiler spans the same space that both spesh and the MoarVM JIT cover between them. MoarVM’s design means that we can deliver performance wins on all platforms we can run on, and then an extra win on x64. For more on the machine code generation process, I can recommend watching this talk by brrt, who leads work on it.

### Argument guards

By this point, we have some optimized code. It was generated for either a particular callsite (a certain specialization) or a combination of callsite and incoming argument types (an observed type specialization). Next, we need a mechanism that will, upon a call, look at the available specializations and see if any of them match up with the incoming arguments. Provided one is found that matches, we can then call it.

My original approach to this was to simply have a list of specializations, each tagged with a callsite and, for each object argument index, an expected type, whether we wanted a type object or a concrete object, and – for container types like Scalar – what type we expected to find on the inside of the container. This was simple to implement, but rather inefficient. Even if all of the type specializations were for the same callsite, the callsite would be compared for each of them. Or, if there were 4 specializations, 3 on one callsite and 1 on a second callsite, we’d have to do 3 failed comparisons before reaching the final one that we were hunting for.

That might not sound overly bad, because comparing callsites is just comparing pointers, and so somewhat cheap (although it’s branching, and branches aren’t so friendly for CPUs). Where it gets worse is that parameter type checks worked the same way. Therefore, if there were 4 specializations of the same callsite, all of them against a Scalar argument with 4 different types of value inside of it, then the Scalar would have to be dereferenced up to 4 times. This isn’t ideal.

My work during the summer saw the introduction of a new, tree-structured, approach. Each node in the tree represents either an operation (load an argument to test, read a value from a Scalar container) with a single child node, or a test with two child nodes representing “yes” and “no”. The leaves of the tree either indicate which specialization to use, or “no result”.

The tree structure allows for loads, tests, and dereferences to be lifted out. Therefore, each argument needs to be loaded once, checked against a particular type once, and dereferenced once if it’s a container. So, if there were to be specializations for (Scalar:D of Int:D, Str:D) and (Scalar:D of Int:D, Num:D), then the first argument would be loaded one time and tested to see if it is a Scalar. If it is, then it will be dereferenced once, and the resulting value tested to see if it’s an Int. Both alternatives for the second argument are placed in the tree underneath this chain of tests, meaning that they do not need to be repeated.

Arg guard trees are dumped in the specializer log for debugging purposes. Here is how the output looks for the situation described above:

0: CALLSITE 0x7f5aa3f1acc0 | Y: 1, N: 0
1: LOAD ARG 0 | Y: 2
2: STABLE CONC Scalar | Y: 3, N: 0
3: DEREF_RW 0 | Y: 4, N: 0
4: DEREF_VALUE 0 | Y: 5, N: 0
5: STABLE CONC Int | Y: 6, N: 0
6: LOAD ARG 1 | Y: 7
7: STABLE CONC Int | Y: 8, N: 9
8: RESULT 0
9: STABLE CONC Str | Y: 10, N: 0
10: RESULT 1


As the output suggests, the argument guard tree is laid out in a single block of memory – an array of nodes. This gives good cache locality on the lookups, and – since argument guard trees are pretty small – means we can use a small integer type for the child node indices rather than requiring a pointer worth of space.
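A walk of such a flat-array tree can be sketched as a tiny interpreter. This is an invented, simplified node layout mirroring the dump above (the DEREF_RW step is omitted, and Python type names stand in for Scalar, Int, and Str), not the real spesh data structure:

```python
# Sketch of evaluating a flat-array argument guard tree. Each node is
# (kind, payload, yes_index, no_index); index 0 is the "no result" sentinel.
def run_guard_tree(nodes, callsite, args):
    i = 1            # node 0 is the "no result" sentinel
    loaded = None
    result = None
    while i != 0:
        kind, payload, yes, no = nodes[i]
        if kind == "CALLSITE":
            i = yes if callsite == payload else no
        elif kind == "LOAD_ARG":
            loaded = args[payload]      # load the argument to test
            i = yes
        elif kind == "STABLE_CONC":
            i = yes if type(loaded).__name__ == payload else no
        elif kind == "DEREF_VALUE":
            loaded = loaded.value       # read the container's content once
            i = yes
        elif kind == "RESULT":
            result = payload            # which specialization to use
            i = 0
    return result

class Scalar:
    def __init__(self, value): self.value = value

nodes = [
    None,                              # 0: no result
    ("CALLSITE", "cs0", 2, 0),         # 1
    ("LOAD_ARG", 0, 3, 0),             # 2: arg 0 loaded once
    ("STABLE_CONC", "Scalar", 4, 0),   # 3
    ("DEREF_VALUE", 0, 5, 0),          # 4: dereferenced once
    ("STABLE_CONC", "int", 6, 0),      # 5
    ("LOAD_ARG", 1, 7, 0),             # 6
    ("STABLE_CONC", "int", 8, 9),      # 7: the two alternatives share the
    ("RESULT", 0, 0, 0),               # 8   tests above this point
    ("STABLE_CONC", "str", 10, 0),     # 9
    ("RESULT", 1, 0, 0),               # 10
]

print(run_guard_tree(nodes, "cs0", [Scalar(42), 7]))     # 0
print(run_guard_tree(nodes, "cs0", [Scalar(42), "hi"]))  # 1
```

Note how the load and dereference of the first argument happen once, and both second-argument alternatives hang beneath them, which is exactly the lifting the tree structure buys.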

### Immutability wins performance

Additional specializations are generated over time, but the argument guard tree is immutable. When a new specialization is generated, the existing argument guard tree is cloned, and the clone is modified to add the new result. That new tree is then installed in place of the previous one, and the previous one can be freed at the next safe point.

Why do this? Because it means that no locks need to be acquired to use a guard tree. In fact, since spesh runs on a single thread of its own, no locks are needed to update the guard trees either, since the single specializer thread means those updates are naturally serialized.
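The clone-and-swap scheme can be sketched as follows. This is an illustrative Python model (the `GuardTreeHolder` name is invented; MoarVM additionally defers freeing the old tree until the next safe point, which garbage-collected Python gets for free):

```python
# Sketch of copy-on-write guard tree updates: readers use whatever tree
# reference they see; the single specializer thread installs a modified
# clone, so neither side needs a lock.
import copy

class GuardTreeHolder:
    def __init__(self, tree):
        self.tree = tree             # readers only ever read this reference

    def lookup(self, evaluate):
        tree = self.tree             # one read gives a consistent snapshot
        return evaluate(tree)

    def add_specialization(self, modify):
        # Called only from the specializer thread, so updates are naturally
        # serialized: clone, modify the clone, then swap it in.
        clone = copy.deepcopy(self.tree)
        modify(clone)
        self.tree = clone            # single reference swap

holder = GuardTreeHolder(["spec0"])
old = holder.tree
holder.add_specialization(lambda t: t.append("spec1"))
print(holder.tree)   # ['spec0', 'spec1']
print(old)           # ['spec0']: a reader holding the old tree is unaffected
```

The immutability is what makes the lock-free read safe: a reader mid-walk of the old tree never observes a half-applied update.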

### Calls between specialized code

In the last part of the series, I mentioned that part of specializing a call is to see if we can map it directly to a specialization. This avoids having to evaluate the argument guard tree of the target of the call, which is a decidedly nice saving. As a result, most uses of the argument guard are on the boundary between unspecialized and specialized code.

But how does the optimizer see if there’s a specialization of the target code that matches the argument types being passed? It does it by evaluating the argument guard tree – but on facts, not real values.

### On Stack Replacement

Switching into specialized code at the point of a call handles many cases, but misses an important one: that where the hot code is entered once, then sits in a loop for a long time. This does happen in various real world programs, but it’s especially common in benchmarks. It’s highly desirable to specialize the hot loop’s code, if possible inlining things into the loop body and compiling the result into machine code.

I discussed detection of hot loops in an earlier part of this series. This time around, let’s take a look at the code for the osrpoint op:

OP(osrpoint):
    if (MVM_spesh_log_is_logging(tc))
        MVM_spesh_log_osr(tc);
    MVM_spesh_osr_poll_for_result(tc);
    goto NEXT;


The first part is about writing a log entry each time around the loop, which is what bumps the loop up in the statistics and causes a specialization to be generated. The call to MVM_spesh_osr_poll_for_result is the part that checks if there is a specialization ready, and jumps into it if so.

One way we could do this is to evaluate the argument guard in every call to MVM_spesh_osr_poll_for_result to see if there’s an appropriate optimization. That would get very pricey, however. We’d like the interpreter to make decent progress through the work until the optimized version of the code is ready. So what to do?

Every frame gets an ID on entry. By tracking this together with the number of specializations available last time we checked, we can quickly short-circuit running the argument guard when we know it will give the very same result as the last time we evaluated it, because nothing changed.

MVMStaticFrameSpesh *spesh = tc->cur_frame->static_info->body.spesh;
MVMint32 num_cands = spesh->body.num_spesh_candidates;
MVMint32 seq_nr = tc->cur_frame->sequence_nr;
if (seq_nr != tc->osr_hunt_frame_nr || num_cands != tc->osr_hunt_num_spesh_candidates) {
    /* Check if there's a candidate available and install it if so. */
    ...

    /* Update state for avoiding checks in the common case. */
    tc->osr_hunt_frame_nr = seq_nr;
    tc->osr_hunt_num_spesh_candidates = num_cands;
}


If there is a candidate that matches, then we jump into it. But how? The specializer makes a table mapping the locations of osrpoint instructions in the unspecialized code to locations in the specialized code. If we produce machine code, a label is also generated to allow entry into the code at that point. So, mostly all OSR does is jump execution into the specialized code. Sometimes, things are approximately as easy as they sound.

There’s a bonus feature hidden in all of this. Remember deoptimization, where we fall back to the interpreter to handle rarely occurring cases? This means we’ll encounter the osrpoint instructions in the unoptimized code again, and so – once the interpreter has done with the unusual case – we can enter back into the specialized, and possibly JIT-compiled, code again. Effectively, spesh factors your slow paths out for you. And if you’re writing a module, it can do this differently based on each application’s use of the module.

### Future idea: argument guard compilation to machine code

At the moment, the argument guard tree is walked by a little interpreter. However, on platforms where we have the capability, we could compile it into machine code. This would perhaps allow branch predictors to do a bit of a better job, as well as eliminate the overhead the interpreter brings (which, given the ops are very cheap, is much more significant here than in the main MoarVM interpreter).

### That’s all, folks

I hope this series has been interesting, and provided some insight into how the MoarVM specializer works. My primary reason for writing it was to put my recent work on the specializer, funded by The Perl Foundation, into context, and I hope this has been a good bit more interesting than just describing the changes in isolation.

Of course, there’s no shortage of further opportunities for optimization work, and I will be reporting on more of that here in the future. I continue to be looking for funding to help make that happen, beyond what I can do in the time I have aside from my consulting work.

## A useful and usable production distribution of Perl 6

On behalf of the Rakudo and Perl 6 development teams, I’m pleased to announce the October 2017 release of “Rakudo Star”, a useful and usable production distribution of Perl 6. The tarball for this release is available from https://rakudo.perl6.org/downloads/star/.

Binaries for macOS and Windows (64 bit) are also available at the same location.

This is a post-Christmas (production) release of Rakudo Star and implements Perl v6.c. It comes with support for the MoarVM backend (all module tests pass on supported platforms). Currently, Star is on a quarterly release cycle.

IMPORTANT: “panda” has been removed from this release since it is deprecated. Please use “zef” instead.

Please note that this release of Rakudo Star is not fully functional with the JVM backend from the Rakudo compiler. Please use the MoarVM backend only.

In the Perl 6 world, we make a distinction between the language (“Perl 6”) and specific implementations of the language such as “Rakudo Perl”.

This Star release includes release 2017.10 of the Rakudo Perl 6 compiler, version 2017.10 of MoarVM, plus various modules, documentation, and other resources collected from the Perl 6 community.

The Rakudo compiler changes since the last Rakudo Star release of 2017.07 are now listed in “2017.08.md”, “2017.09.md”, and “2017.10.md” under the “rakudo/docs/announce” directory of the source distribution.

Notable changes in modules shipped with Rakudo Star:

+ DBIish: Newer version (doesn't work with 2017.01 anymore)
+ Test-META: New, along with its dependencies (JSON-Class, JSON-Marshal, JSON-Name, JSON-Unmarshal and META6)
+ doc: Too many changes to list; p6doc-index merged into p6doc, with the index built on first run
+ p6-Template-Mustache: Fixed on Windows
+ perl6-datetime-format: New (heavily used in the ecosystem)
+ perl6-file-which: Fixed tests on Windows
+ tap-harness6: Many fixes
+ zef: New version with many fixes


There are some key features of Perl 6 that Rakudo Star does not yet handle appropriately, although they will appear in upcoming releases. Some of the not-quite-there features include:

• non-blocking I/O (now works for sockets and processes)
• some bits of Synopsis 9 and 11

There is an online resource at http://perl6.org/compilers/features that lists the known implemented and missing features of Rakudo’s backends and other Perl 6 implementations.

In many places we’ve tried to make Rakudo smart enough to inform the programmer that a given feature isn’t implemented, but there are many that we’ve missed. Bug reports about missing and broken features are welcomed at rakudobug@perl.org.

See https://perl6.org/ for links to much more information about Perl 6, including documentation, example code, tutorials, presentations, reference materials, design documents, and other supporting resources. Some Perl 6 tutorials are available under the “docs” directory in the release tarball.

The development team thanks all of the contributors and sponsors for making Rakudo Star possible. If you would like to contribute, see http://rakudo.org/how-to-help, ask on the perl6-compiler@perl.org mailing list, or join us on IRC #perl6 on freenode.

## 6guts: MoarVM Specializer Improvements Part 3: Optimizing Code

Finally, after considering gathering data and planning what to optimize, it’s time to look at how MoarVM’s dynamic optimizer, “spesh”, transforms code into something that should run faster.

### Specialization

The name “spesh” is short for “specializer”, and this nicely captures the overall approach taken. The input bytecode the optimizer considers is full of genericity, and an interpreter of that bytecode needs to handle all of the cases. For example, a method call depends on the type of the invocant, and in the general case that may vary from call to call. Worse, while most of the time the method resolution might just involve looking in a cache, in other cases we might have to call into find_method to do a more dynamic dispatch. That in turn means method lookup is potentially an invocation point, which makes it harder to reason about the state of things after the method lookup.

Of course, in the common case, it’s a boring method lookup that will be found in the cache, and in many cases there’s only one invocant type that shows up at a given callsite at runtime. Therefore, we can specialize the code for this case. This means we can cache the result of the method lookup, and since that is now a constant, we may be able to optimize the call itself using that knowledge.

This specialization of method resolution is just one of many kinds of specialization that we can perform. I’ll go through a bunch more of them in this post – but before that, let’s take a look at how the code to optimize is represented, so that these optimizations can be performed conveniently.

### Instruction Nodes, Basic Blocks, and CFGs

The input to the optimization process is bytecode, which is just a bunch of bytes with instruction codes, register indexes, and so forth. It goes without saying that doing optimizations by manipulating that directly would be pretty horrible. So, the first thing we do is make a pass through the bytecode and turn every instruction in the bytecode into an instruction node: a C structure that points to information about the instruction, to the previous and next instructions, and contains information about the operands.

This is an improvement from the point of view of being able to transform the code, but it doesn’t help the optimizer to “understand” the code. The step that transforms the bytecode instructions into instruction nodes also performs a second task: it marks the start and end of each basic block. A basic block is a sequence of instructions that will run one after the other, without any flow control. Actually, this definition is something we’ve gradually tightened up over time, and was particularly refined during my work on spesh over the summer. By now, anything that might invoke terminates a basic block. This means that things like findmethod (which may sometimes invoke find_method on the meta-object) and decont (which normally takes a value out of a Scalar container, but may have to invoke a FETCH routine on a Proxy) terminate basic blocks too. In short, we end up with a lot of basic blocks, which makes life a bit harder for those of us who need to read the spesh debug output, but experience tells us that the more honest we are about just how many things might cause an invocation, the more useful it is in terms of reasoning about the program and optimizing it.

The basic blocks themselves are only so useful. The next step is to arrange them into a Control Flow Graph (CFG). Each basic block is represented by a C structure, and has a list of its successors and predecessors. The successors of a basic block are those basic blocks that we might end up in after this one. So, if it’s a condition that may be taken or not, then there will be two successors. If it’s a jump table, there may be a lot more than two.

Once we have a control flow graph, it’s easy to pick out basic blocks that are not reachable from the root of the graph. These could never be reached in any way, and are therefore dead code.
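Dead-block detection is then a plain graph reachability walk. A minimal sketch (successor map and block names invented for illustration):

```python
# Sketch of unreachable-basic-block elimination on a CFG: any block not
# reachable from the entry block can never run, and so is dead code.
def reachable(succ, entry):
    seen = set()
    stack = [entry]
    while stack:
        bb = stack.pop()
        if bb not in seen:
            seen.add(bb)
            stack.extend(succ.get(bb, []))   # visit each successor
    return seen

# entry -> a -> c; block "b" is never a successor of anything reachable.
succ = {"entry": ["a"], "a": ["c"], "b": ["c"], "c": []}
live = reachable(succ, "entry")
dead = set(succ) - live
print(sorted(live))   # ['a', 'c', 'entry']
print(sorted(dead))   # ['b']
```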

One thing that we need to take special care of is exception handlers. Since they work using dynamic scope, they may be reached only through a callee, but we are only assembling the CFG for a single routine or block. We need to make sure they are considered reachable, so they aren’t deleted as dead code. One safe and conservative way to do this is to add a dummy entry node and link them all from it. This is what spesh did for all exception handlers, until this summer. Now, it only does this for catch handlers. Control exception handlers are instead made reachable by linking them into the graph as a successor to all of the basic blocks that are in the region covered by the handler. Since these are used for redo, next, and last handling in loops, it means we get much more precise flow control information for code with loops in it. It also means that control exception handlers can now be eliminated as dead code too, if all the code they cover goes away. Making this work out meant doing some work to remove assumptions in various areas of spesh that exception handlers were never eliminated as dead code.

### Dominance and SSA form

The control flow graph is a step forward, but it still falls short of letting us do the kind of analysis we’d really like. We’d also like to be able to propagate type information through the graph. So, if we know a register holds an Int, and then it is assigned into a second register, we’d like to mark that register as also containing an Int. The challenge here is that register allocation during code generation wants to minimize the number of registers that are needed – a good thing for memory use – and so a given register will have many different – and unrelated – usages over time.

To resolve this, the instructions are transformed into Static Single Assignment form. Each time a register is assigned to in the bytecode (this is where the “static” comes from), it is given a new version number. Thus, the first assignment to r2 would be named r2(1), the second r2(2), and so forth. This helps enormously, because now information can be stored about these renamed variables, and it’s easy to access the correct information upon each usage. Register re-use is something we no longer have to worry about in this model.
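For straight-line code, the renaming is a simple pass that bumps a version counter per register on each write. This Python sketch (instruction tuples invented for illustration) handles a single basic block only, which is why it needs no phi nodes:

```python
# Sketch of SSA version numbering for a single basic block: each write to a
# register gets a fresh version; each read refers to the latest version.
def to_ssa(instructions):
    version = {}    # register name -> current version number
    out = []
    for op, dest, srcs in instructions:
        new_srcs = [f"{s}({version[s]})" for s in srcs]   # reads use latest
        version[dest] = version.get(dest, 0) + 1          # writes get fresh
        out.append((op, f"{dest}({version[dest]})", new_srcs))
    return out

code = [
    ("const", "r1", []),
    ("set",   "r2", ["r1"]),
    ("add",   "r1", ["r1", "r2"]),   # reuses r1: the write gets version 2
]
print(to_ssa(code))
# [('const', 'r1(1)', []), ('set', 'r2(1)', ['r1(1)']),
#  ('add', 'r1(2)', ['r1(1)', 'r2(1)'])]
```

Information can now be attached to r1(1) and r1(2) independently, even though they share a storage location.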

So far so straightforward, but of course there’s a catch: what happens when we have conditionals and loops?

  if_i r1(1) goto LABEL1
  set r2(1), r3(1)
  goto LABEL2
LABEL1:
  set r2(2), r4(1)
LABEL2:
  # Which version of r2 reaches here???


This problem is resolved by the use of phi nodes, which “merge” the various versions at “join points”:

  if_i r1(1) goto LABEL1
  set r2(1), r3(1)
  goto LABEL2
LABEL1:
  set r2(2), r4(1)
LABEL2:
  PHI r2(3), r2(1), r2(2)


The placement of these phi nodes is an interesting problem, and that’s where dominance comes in. Basic block B1 dominates basic block B2 if every possible path to B2 in the flow control graph must pass through B1. There’s also a notion of immediate dominance, which you can think of as the “closest” dominator of a node. The immediate dominators form a tree, not a graph, and this tree turns out to be a pretty good order to explore the basic blocks in during analysis and optimization. Last but not least, there’s also the concept of dominance frontiers – that is, where a node’s dominance ends – and this in turn tells us where the phi nodes should be placed.

Thankfully, there’s a good amount of literature on how to implement all of this efficiently and, somewhat interestingly, the dominance algorithm MoarVM picked is a great example of how a cubic algorithm with a really low constant factor can beat an n*log(n) algorithm on all relevant problem sizes (20,000 basic blocks or so – which is a lot).

### Facts

Every SSA register has a set of facts associated with it. These include known type, known value, known to be concrete, known to be a type object, and so forth. These provide information that is used to know what optimizations we can do. For example, knowing the type of something can allow an istype operation to be turned into a constant. This means that the facts of the result of that istype now include a known value, and that might mean a conditional can be eliminated entirely.

Facts are, in fact, something of a relative concept in spesh. In the previous parts of this series, I discussed how statistics about the types that show up in the program are collected and aggregated. Of course, these are just statistics, not proofs. So can these statistics be used to introduce facts? It turns out they can, provided we are willing to add a guard. A guard is an instruction that cheaply checks that reality really does match up with expectations. If it does not, then deoptimization is triggered, meaning that we fall back to the slow-path code that can handle all of the cases. Beyond the guard, the facts can be assumed to be true. Guards are another area that I simplified and made more efficient during my work in the summer.

Phi nodes and facts have an interesting relationship. At a phi node, we need to merge the facts from all of the joined values. We can only merge those things that all of the joined values agree on. For example, if we have a phi with two inputs, and they both have a fact indicating a known type Int, then the result of the phi can have that fact set on it. By contrast, if one input thinks it’s an Int and the other a Rat, then we don’t know what it is.

During my work over the summer, I also introduced the notion of a “dead writer”. This is a fact that is set when the writer of a SSA register is eliminated as dead code. This means that the merge can completely ignore what that input thinks, as it will never be run. A practical example where this shows up is with optional parameters:

sub process($data, :$normalize-first) {
    if $normalize-first {
        normalize($data);
    }
    do-stuff-with($data);
}

Specializations are produced for a particular callsite, meaning that we know whether the normalize-first argument is passed or not. Inside of the generated code, there is a branch that sticks an Any into $normalize-first if no argument is received. Before my change, the phi node would not have any facts set on it, as the two sides disagreed. Now, it can see that one side of the branch is gone, and the merge can keep whichever facts survive. In the case that the named argument was not passed, the if can also be eliminated entirely, since a type object is falsey.

### Specialization by arguments

So, time to look at some of the specializations! The first transformations that are applied relate to the instructions that bind the incoming arguments into the parameter variables that are to receive them. Specializations are keyed on callsite, and a callsite object indicates the number of arguments, whether any of them are natively typed, and – for named arguments – what names the arguments have. The argument processing code can be specialized for the callsite, by:

• Throwing out the checkarity instruction, if we know that the number of arguments being passed is in range.
• Directly reading arguments from the args buffer when no boxing/unboxing is required, instead of calling an op that checks if a box/unbox is needed. Alternatively, if a box/unbox operation is needed (passing native arg to object parameter and vice versa), then this can be inserted directly, and may be subject to further optimization and/or JIT compilation later.
• Directly reading named arguments by index into the arguments buffer, rather than having to do any kind of hash lookup or string comparisons (this makes named arguments as cheap to receive as positionals).
• Eliminating the “check all named arguments are used” instruction, if present and it can be proven unnecessary.
• Eliminating branches related to optional arguments.
• Generating code to build a hash directly from the args buffer contents for slurpy named arguments.

### Attribute access specialization

Attribute access instructions pass the name of the attribute together with the type the attribute was declared in. This is used to resolve the location of the attribute in the object. There is a static optimization that adds a hint, which must be verified to be in range at runtime, and which does not apply when multiple inheritance is used.

When the spesh facts contain the type of the object that the attribute is being looked up in, then – after some sanity checks – this can be turned into something much cheaper: a read of the memory location at an offset from the pointer to the object. Both reads and writes of object attributes can receive this treatment.
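The idea can be sketched in Python (hypothetical names; MoarVM operates on object memory via pointer offsets, not Python lists): resolving the attribute's slot once at specialization time leaves only an indexed read at runtime.

```python
# Hypothetical sketch: once the object's type is known (and guarded), an
# attribute access by name can be replaced with a read at a fixed slot.

class Layout:
    def __init__(self, slots):
        self.slots = slots          # name -> slot index, fixed per type

POINT = Layout({"x": 0, "y": 1})    # invented example type

def generic_attr_read(layout, storage, name):
    # Generic path: resolve the attribute's location on every access.
    return storage[layout.slots[name]]

def specialize_attr_read(layout, name):
    # Specialization time: resolve the slot once...
    slot = layout.slots[name]
    # ...and emit an accessor that is just an indexed read.
    return lambda storage: storage[slot]

p = [3, 4]                          # storage for a point with x=3, y=4
read_y = specialize_attr_read(POINT, "y")
assert read_y(p) == generic_attr_read(POINT, p, "y") == 4
```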

### Decontainerization specialization

Many values in Perl 6 live inside of a Scalar container. The decont instruction is used to take a value out of a container (which may be a Scalar or a Proxy). Often, decont is emitted when there actually is no container (because it’s either impossible or too expensive to reason about that at compile time). When we have the facts to show it’s not being used on a container type, the decont can be rewritten into a set instruction. And when the facts show that it’s a Scalar, then it can be optimized into a cheap memory offset read.

### Method resolution specialization

When both the name of a method and the type it’s being resolved on are known from the facts, then spesh looks into the method cache, which is a hash. If it finds the method, it turns the method lookup into a constant, setting the known value fact along the way. This, in the common case, saves a hash lookup per method call, which is already something, but knowing the exact method that will be run opens the door to some very powerful optimizations on the invocation of the resolved method.
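A rough Python analogy of folding the lookup into a constant (invented names, with a dict standing in for the per-type method cache): the hash lookup happens once, at specialization time, and its result is baked into the specialized code as a known value.

```python
# Hypothetical sketch: when the receiver type and method name are both
# known facts, the lookup is done once during specialization and the
# resolved method is embedded as a constant.

method_cache = {("Str", "uc"): lambda s: s.upper()}   # toy method cache

def specialize_findmethod(type_name, method_name):
    meth = method_cache.get((type_name, method_name))
    if meth is None:
        return None        # fall back to the generic lookup path
    # The resolved method becomes a known-value fact: later passes can
    # now optimize the invocation of this exact code object.
    return meth

resolved = specialize_findmethod("Str", "uc")
assert resolved("moar") == "MOAR"
```

The pay-off described in the text follows from this: once the callee is a constant, invocation specialization and inlining become possible.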

### Invocation specialization

Specialization of invocations – that is, calling of subs, methods, blocks, and so forth – is one of the most valuable things spesh does. There are quite a few combinations of things that can happen here, and my work over the summer made this area of spesh a good deal more powerful (and, as a result, took some time off various benchmarks).

The most important thing for spesh to know is what, exactly, is the target of the invocation. In the best case, it will be a known value already. This is the case with subs resolved from the setting or from UNIT, as well as in the case that a findmethod was optimized into a constant. These used to be the only cases, until I also added the recording of statistics on the static frame that is the target of an invocation. These are analyzed and, if they are pretty stable, a guard can be inserted. This greatly increases the range of invocations that can be optimized. For example, if a library that can be configured with a callback is used by a program with a consistent, small callback, the callback’s code may be inlined into the place in the library code that calls it.

The next step involves seeing if the target of the invocation is a multi sub or method. If so, then the argument types are used to do a lookup into the multi dispatch cache. This means that spesh can potentially resolve a multi-dispatch once at optimization time, and just save the result.

Argument types are worth dwelling on a little longer. Up until the summer, the facts were consulted to see if the types were known. If there was no known type from the facts, the invocation would not be optimized. Now, using the far richer set of statistics, when a fact is missing but the statistics suggest a very consistent argument type, a guard can be inserted. This again increases the range of invocations that can be optimized.
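The guard-plus-deoptimization idea might be sketched like this in Python (a toy model with invented names; real guards live in specialized bytecode, and deoptimization resumes the interpreter mid-frame rather than unwinding with an exception):

```python
# Hypothetical sketch: statistics said an argument is (nearly) always an
# int, so the specialized code checks that cheaply and falls back to the
# generic code on a mismatch.

class Deoptimize(Exception):
    pass

def guard_type(value, expected_type):
    if type(value) is not expected_type:
        raise Deoptimize()      # leave specialized code for the slow path
    return value

def specialized_double(x):
    guard_type(x, int)          # guard inserted from the statistics
    return x + x                # the add may now assume int semantics

def generic_double(x):
    return x + x                # fully general, slower path

def call_double(x):
    try:
        return specialized_double(x)
    except Deoptimize:
        return generic_double(x)

assert call_double(21) == 42        # fast path taken
assert call_double("ab") == "abab"  # guard fails; still correct via deopt
```

The key property, as the earlier parts of this series discussed, is that a failed guard costs correctness nothing: the program simply continues in the unspecialized code.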

So, by this point a multi dispatch may have been resolved, and we have something to invoke. The argument types are now used to see if there is a specialization of the target of the invocation for those arguments. Typically, if it’s on a hot path, there will be. In that case, the invocation can be rewritten to use an instruction that pre-selects the specialization. This saves a lot of time re-checking argument types, and is a powerful optimization that can reduce the cost of parameter type checks to zero.

Last, but very much not least, it may turn out that the target specialization being invoked is very small. In this case, a few more checks are performed to see if the call really does need its own invocation record (for example, it may do so if it has lexicals that will be closed over, or if it might be the root of a continuation, or it has an exit handler). Provided the conditions are met, the target may be inlined – that is, have its code copied – into the caller. This saves the cost of creating and tearing down an invocation record (also known as call frame), and also allows some further optimizations that consider the caller and callee together.

Prior to my work in the summer, it wasn’t possible to inline anything that looked up lexical variables in an outer scope (that is to say, anything that was a closure). Now that restriction has been lifted, and closures can be inlined too.

Finally, it’s worth noting that MoarVM also supports inlining specializations that themselves contain code that was inlined. This isn’t too bad, except that in order to support deoptimization it has to also be able to do multiple levels of uninlining too. That was a slight headache to implement.

### Conditional specialization

This was mentioned in passing earlier, but is worth calling out again: when the value being evaluated in a conditional jump is known, the branch can be eliminated. This in turn makes one side of the branch or the other into dead code. This had been in place for a while, but in my work during the summer I made it handle some further cases, and also made it remove the dead code path more immediately. That led to many SSA registers being marked with the dead writer fact, and thus to more optimization possibilities in code that followed the branch.
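A toy Python sketch of the transformation (a made-up instruction format, purely illustrative): when the tested value is a known fact, the conditional jump folds into a direct jump, and the untaken side becomes dead code for a later pass to delete.

```python
# Hypothetical sketch of constant-branch folding over a tiny instruction
# list. An "if" instruction names a tested variable and two jump targets.

def eliminate_branch(instructions, known_facts):
    out = []
    for ins in instructions:
        if ins[0] == "if" and ins[1] in known_facts:
            # The condition is a known fact: pick the target now and
            # replace the conditional jump with a direct one.
            taken = ins[2] if known_facts[ins[1]] else ins[3]
            out.append(("goto", taken))
        else:
            out.append(ins)     # nothing known; leave the branch alone
    return out

code = [("if", "flag", "then_block", "else_block"), ("label", "then_block")]
folded = eliminate_branch(code, {"flag": True})
assert folded[0] == ("goto", "then_block")   # else_block is now dead
```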

### Control exception specialization

Control exceptions are used for things like next and last. In the general case, their target may be some stack frames away, and so they are handled by the exception system, which walks down the stack frames to find a handler. In some cases, however – perhaps thanks to inlining – the handler and the throw may be in the same frame. In this case, the throw can be replaced with a simple – and far cheaper – goto instruction. We had this optimization for a while, but it was only during the summer that I made it work when the throwing part was in code that had been inlined, making it far more useful for Perl 6 code.

### In summary

Here we’ve seen how spesh takes pieces of a program, which may be full of generalities, and specializes them for the actual situations that are encountered when the program is run. Since it operates at runtime, it is able to do this across compilation units, meaning that a module’s code may be optimized in quite different ways according to the way a particular consumer uses it. We’ve also discussed the importance of guards and deoptimization in making good use of the gathered statistics.

You’ll have noticed I mentioned many changes that took place “in the summer”, and this work was done under a grant from The Perl Foundation. So, thanks go to them for that! Also, I’m still looking for sponsors.

In the next, and final, part of this series, we’ll take a look at argument guards, which are used to efficiently pick which specialization to use when crossing from unspecialized to specialized code.

## gfldex: Racing Rakudo

In many racing sports, telemetry plays a big role in getting faster. Thanks to a torrent of commits by lizmat, you can use telemetry now too!

perl6 -e 'use Telemetry; snapper(½); my @a = (‚aaaa‘..‚zzzz‘).pick(1000); say @a.sort.[*-1 / 2];'
zyzl
Telemetry Report of Process #30304 (2017-11-05T17:24:38Z)
No supervisor thread has been running
Number of Snapshots: 31
Initial Size:        93684 Kbytes
Total Time:          14.74 seconds
Total CPU Usage:     15.08 seconds

wallclock  util%  max-rss  gw      gtc  tw      ttc  aw      atc
500951  53.81     8424
500557  51.92     9240
548677  52.15    12376
506068  52.51      196
500380  51.94     8976
506552  51.74     9240
500517  52.45     9240
500482  52.33     9504
506813  51.67     6864
502634  51.63
500520  51.78     6072
500539  52.13     7128
503437  52.29     7920
500419  52.45     8976
500544  51.89     8712
500550  49.92     6864
602948  49.71     8712
500548  50.33
500545  49.92      320
500518  49.92
500530  49.92
500529  49.91
500507  49.92
506886  50.07
500510  49.93     1848
500488  49.93
500511  49.93
508389  49.94
508510  51.27      264
27636  58.33
--------- ------ -------- --- -------- --- -------- --- --------
14738710  51.16   130876

Legend:
wallclock  Number of microseconds elapsed
util%  Percentage of CPU utilization (0..100%)
max-rss  Maximum resident set size (in Kbytes)
gw  The number of general worker threads
tw  The number of timer threads
aw  The number of affinity threads


The snapper function takes an interval at which data is collected. On termination of the program the table above is shown.

The module comes with plenty of subs to collect the same data yourself and compile your own report, which may be sensible in long-running processes. Or you can call the report sub by hand every now and then.

use Telemetry;

react {
    snapper;
    whenever Supply.interval(60) {
        say report;
    }
}

If the terminal won’t cut it, you can use HTTP to fetch telemetry data.

Neither the documentation nor the module is finished, so stay tuned for more data.

## rakudo.org: Main Development Branch Renamed from “nom” to “master”

If you track the latest Rakudo changes by means of a local checkout of the development repository, please take note that we renamed our unusually-named main development branch from nom to the more traditional name master.

This branch will now track all the latest changes. Attempting to build the HEAD of the nom branch will display a message at some point during the build stage, informing you that the branch was renamed. The build will then wait for the user to press ENTER if they want to continue.

UPDATE 1: the blocking message has been disabled for now, to avoid too much impact on any of the bleeding edge users’ automations.

#### Rakudobrew

Rakudobrew has been updated to default to the master branch now. Note that you need to upgrade rakudobrew to receive that change; run rakudobrew self-upgrade.

Until updated, rakudobrew‘s standard instructions will build the no-longer actively updated nom branch.

To build the latest and greatest Rakudo, use rakudobrew build moar master instead of rakudobrew build nom. In general, we advise against using rakudobrew if the goal is simply to track the development version of the compiler; instead, we recommend building directly from the repository. Regular end-users should continue using pre-built Rakudo Star distributions.

## Zoffix Znet: Rakudo Perl 6 Advent Calendar 2017 Call for Authors

### Published on 2017-10-24T00:00:00

Write a blog post about Rakudo Perl 6

## gfldex: There Is More Than One Way At The Same Time

The Perl 6 Rosetta Code section for parallel calculations is terribly outdated and missing all the goodies that were added or fixed in the last few weeks. With this post I would like to propose an updated version for Rosetta Code. If you believe that I missed something, feel free to comment below. Please keep in mind that Rosetta Code is for showing off, not for being comprehensive.

use v6.d.PREVIEW;

Perl 6 provides parallel execution of code via threads. There are low-level constructs that start a thread or safely pause execution.

my $t1 = Thread.start({ say [+] 1..10_000_000 });
my $t2 = Thread.start({ say [*] 1..10_000 });
$t1.finish;
$t2.finish;

my $l = Lock.new;
$l.lock;
$t1 = Thread.start: { $l.lock; say 'got the lock'; $l.unlock };
sleep 2;
$l.unlock;

$t1.finish;

When processing lists, one can use a high-level Iterator created by the methods hyper and race. The latter may return values out of order. Those Iterators will distribute the elements of the list to worker threads, which are in turn assigned to OS-level threads by Rakudo’s ThreadPoolScheduler. The whole construct will block until the last element is processed.

my @values = 1..100;
sub postfix:<!> (Int $n) { [*] 1..$n }
say [+] @values.hyper.map( -> $i { print '.' if $i %% 100; $i!.chars });


For for-lovers there are the race for and hyper for keywords, which distribute work over threads in the same way as their respective method forms.

race for 1..100 {
    say .Str; # There be out of order dragons!
}

my @a = do hyper for 1..100 {
    .Int! # Here be thread dragons!
}

say [+] @a;

Perl 6 sports constructs that follow the reactive programming model. One can spin up many worker threads and use thread-safe Channels or Supplys to move values from one thread to another. A react block can combine those streams of values, process them, and react to conditions like cleaning up after a worker thread is done producing values, or dealing with errors. The latter is done by bottling up Exception objects into Failure objects that keep track of where the error first occurred and where it was used instead of a proper value.

my \pipe = Supplier::Preserving.new;

start {
    for $*HOME {
        pipe.emit: .IO if .f & .ends-with('.txt');
        say „Looking in ⟨{.Str}⟩ for files that end in ".txt"“ if .IO.d;
        .IO.dir()».&?BLOCK when .IO.d;
        CATCH { default { note .^name, ': ', .Str; pipe.emit: Failure.new(.item); } }
    }
    pipe.done;
}

react {
    whenever pipe.Supply {
        say „Checking ⟨{.Str}⟩ for "Rosetta".“;
        say „I found Rosetta in ⟨{.Str}⟩“ if try .open.slurp.contains('Rosetta');
        LAST { say ‚Done looking for files.‘; done; }
        CATCH { default { note .^name, ': ', .Str; } }
    }
    whenever Promise.in(60*10) {
        say „I gave up looking for Rosetta after 10 minutes.“;
        pipe.done;
        done;
    }
}

Many built-in objects will return a Supply or a Promise. The latter will return a single value or just convey an event like a timeout. In the example above we used a Promise in that fashion. Below we shell out to find and process its output line by line. This could be used in a react block if there are many different types of events to process. Here we just tap into a stream of values and process them one by one. Since we don’t have a react block to provide a blocking event loop, we wait for find to finish with await and process its exitcode. Anything inside the block given to .tap will run in its own thread.

my $find = Proc::Async.new('find', $*HOME, '-iname', '*.txt');
$find.stdout.lines.tap: {
    say „Looking for "Rosetta" in ⟨$_⟩“;
    say „Found "Rosetta" in ⟨$_⟩“ if try .open.slurp.contains('Rosetta');
};

await $find.start.then: { say „find finished with exitcode: “, .result.exitcode; };

Having operators process values in parallel via threads or vector units is yet to be done. Both hyper operators and Junctions are candidates for autothreading. If you use them today, please keep in mind that side effects may provide foot guns in the future.

## gfldex: It’s Classes All The Way Down

### Published by gfldex on 2017-10-08T17:23:05

While building a cache for a web API that spits out JSON, I found myself walking over the same data twice to fix a lack of proper typing. The JSON knows only about strings, even though most of the fields are integers and timestamps. I’m fixing the types after parsing the JSON with JSON::Fast by coercively .map-ing.

@stations.=hyper.map: { # Here be heisendragons!
    .<lastchangetime> = .<lastchangetime>
        ?? DateTime.new(.<lastchangetime>.subst(' ', 'T') ~ 'Z', :formatter(&ISO8601))
        !! DateTime;
    .<clickcount> = .<clickcount>.Int;
    .<lastcheckok> = .<lastcheckok>.Int.Bool;
    (note "$_/$stations-count processed" if $_ %% 1000) with $++;
    .Hash
};

The hyper helps a lot to speed things up but will put a lot of stress on the CPU cache. There must be a better way to do that. Then lizmat showed where Rakudo shows its guts.

m: grammar A { token a { }; rule a { } }
OUTPUT: «5===SORRY!5=== Error while compiling <tmp>␤Package 'A' already has a regex 'a' (did you mean to declare a multi-method?)␤

Tokens are regexes, or maybe methods. But if tokens are methods, then grammars must be classes. And that allows us to subclass a grammar.
grammar WWW::Radiobrowser::JSON is JSON {
    token TOP { \s* <top-array> \s* }
    rule top-array { '[' ~ ']' <station-list> }
    rule station-list { <station> * % ',' }
    rule station { '{' ~ '}' <attribute-list> }
    rule attribute-list { <attribute> * % ',' }

    token year { \d+ }
    token month { \d ** 2 }
    token day { \d ** 2 }
    token hour { \d ** 2 }
    token minute { \d ** 2 }
    token second { \d ** 2 }
    token date { <year> '-' <month> '-' <day> ' ' <hour> ':' <minute> ':' <second> }
    token bool { <value:true> || <value:false> }
    token empty-string { '""' }
    token number { <value:number> }

    proto token attribute { * }
    token attribute:sym<clickcount> { '"clickcount"' \s* ':' \s* '"' <number> '"' }
    token attribute:sym<lastchangetime> { '"lastchangetime"' \s* ':' \s* '"' <date> '"' }
    token attribute:sym<lastcheckok> { '"lastcheckok"' \s* ':' \s* '"' <bool> '"' }
}

Here we overload some tokens and forward calls to tokens that got a different name in the parent grammar. The action class follows suit.

class WWW::Radiobrowser::JSON::Actions is JSON::Actions {
    method TOP($/) { make $<top-array>.made; }
    method top-array($/) { make $<station-list>.made.item; }
    method station-list($/) { make $<station>.hyper.map(*.made).flat; } # Here be heisendragons!
    method station($/) { make $<attribute-list>.made.hash.item; }
    method attribute-list($/) { make $<attribute>».made.flat; }
    method date($_) { .make: DateTime.new(.<year>.Int, .<month>.Int, .<day>.Int, .<hour>.Int, .<minute>.Int, .<second>.Num) }
    method bool($_) { .make: .<value>.made ?? Bool::True !! Bool::False }
    method empty-string($_) { .make: Str }

    method attribute:sym<clickcount>($/) { make 'clickcount' => $/<number>.Int; }
    method attribute:sym<lastchangetime>($/) { make 'lastchangetime' => $/<date>.made; }
    method attribute:sym<lastcheckok>($/) { make 'lastcheckok' => $/<bool>.made; }
}


In case you wonder how to call a method with such a funky name, use the quoting version of postfix:<.>.

class C { method m:sym<s>{} }
C.new.'m:sym<s>'()


I truncated the examples above; the full source can be found here. The .hyper version is still quite a bit faster, but also heisenbuggy. In fact, .hyper may not work at all when executed too soon after a program starts, or when used in a recursive Routine. This is mostly because the grammar engine is one of the oldest parts of Rakudo, with the least amount of work done to make it fast. That is a solvable problem. I’m looking forward to Grammar All The Things.

If you got grammars please don’t hide them. Somebody might need them to be classy.

## samcv: Grant Final Report

### Published on 2017-09-25T07:00:00

This contains the work since the last report as well as my final report.

## Work Since the Last Report

### Merged in Unicode Collation Algorithm

I merged the Unicode Collation Algorithm branch into MoarVM. Now that the sort is stable, the coll, unicmp and .collate operators in Rakudo are no longer experimental, so use experimental :collation is no longer needed to use them.

The $*COLLATION dynamic variable is still hidden under experimental, since it is possible design changes could be made to it.

### Prepend

In some of my other posts I talked about the difficulties of getting Prepend codepoints working properly. To do this I had changed how we store synthetics so as not to assume that the first codepoint of a synthetic is always the base character. This month I merged in the change in synthetic representation and implemented the features which were now possible with the new representation.

The feature was to detect which character is the base character and store its index in the synthetic. Most combiners, such as diacritics, come after the base character and are Extend codepoints: a + ◌́. Prepend has the reverse functionality and comes before: ؀◌ + 9 (Arabic number sign + number). This required many assumptions our code rightfully made before Unicode 9.0 to be abandoned.

When a synthetic is created, we now check to see if the first codepoint is a Prepend codepoint. If so, we keep checking until we find a codepoint that is not a Prepend. In most cases, the base character is the codepoint following the last Prepend mark. In degenerate cases there is no base character: a grapheme composed of all Prepend’s, or of only Prepend’s and Extend’s. In these degenerate cases we set the first codepoint as the base character.

Once I had that done, I was able to fix some of our ops which did not work correctly if there were Prepend characters. This included fixing our case changing op so it would now work on graphemes with Prepend marks. Since the case change ops apply the case change to the base codepoint, it is necessary for us to have the correct base character. Similarly, ordbaseat, which gets the base codepoint, also needed to be fixed. This allowed ignoremark to now work for graphemes with Prepend marks.
### Documentation

I wrote documentation on our Unicode Collation Algorithm, which explains to the reader what the UCA solves, with examples of different single-to-multiple or multiple-to-single mappings of codepoints. It goes into a fair amount of detail on how it was implemented.

### UTF8-C8

#### Bugs with Encoding into UTF8-C8

Since MoarVM normalizes all input text, our way of dealing with not normalizing is important to people who want their strings to pass through unchanged. Previously there was a bug where if something was a valid UTF-8 storable value, such as a surrogate or a value higher than 0x10FFFF, it would create a Str type with that value, even though it was not valid Unicode. It would then throw when this value was attempted to be used (since the Str type shouldn’t hold values higher than 0x10FFFF or surrogates). As this is the only way we have of dealing with text unaltered, this seemed like a serious issue that would prevent UTF8-C8 from being usable in a production environment attempting to encode arbitrary data into UTF8-C8. [f112fbcf]

#### Bugs While Working with UTF8-C8 Strings

Another issue I fixed was that under concatenation, text replacement or renormalization, the UTF8-C8 codepoints would be "flattened": they would lose their special properties and instead start acting like any other set of Unicode codepoints (although an unusual one, since it contains a private use character and a hex code of the stored value). I changed our codepoint iterator so that you can optionally choose to have UTF8-C8 synthetics passed back unchanged. We use synthetics both to store UTF8-C8 values and to store graphemes which contain multiple codepoints. When iterating by codepoint over an already existing arbitrary string, we want to retain the UTF8-C8 codepoints and make sure they are not changed during the renormalization process. This has been fixed, and UTF8-C8 strings are now drastically more reliable and, hopefully, much more production ready.
[2f71945d]

### Grapheme Caching and Move To

The function which moves a grapheme iterator forward a specified number of graphemes now works even if we aren’t starting from the very start of the string. In this function we have a first loop which locates the correct strand, and had a second loop which would find the correct grapheme inside that strand. I refactored the grapheme locating code and was able to remove the second loop.

In the grapheme caching implementation we can save a lot of work by not creating a new iterator for every grapheme access. Not only that, I also sped up the move_to function by about 30%. While the cached iterator reduces access to this function for the functions I added it to, there are still many which may seek for each grapheme requested; this will speed that up.

### Other MoarVM Changes

I set up automated Appveyor builds for MoarVM, so we get automated builds on Windows (Travis CI only does macOS and Linux builds). I fixed a segfault that occurred when compiling nqp on Alpine Linux, which uses musl as its libc. I ended up reducing the depth of recursion in the optimize_bb() function when compiling nqp from 99 to 29 (a 3x reduction). In the Rakudo build, we have a 5x reduction in the depth of recursion.

## Final Report

As I’ve already said many things in my previous grant reports (1, 2, 3, 4), I will iterate on some of the big things I did, which is not exhaustive. For full details of all the changes, please see my other grant reports, as well as a partial list of bonus changes I made in MoarVM during the grant at the bottom of the page.

The only thing not completed was implementing a whole new UCD backend. While I planned on doing this, I ended up choosing not to. I realized that it would not have been the best use of my time on the grant, as there were many more user-noticeable changes I could do.
Despite this, I did achieve the goals that the rewrite was intended to solve: namely, making property values distinct for each property and making the UCD database generation more reproducible. While it is not exactly the same on different runs, the only thing that changes is the internal property codes, which does not affect anything adversely. It is fully functional every time, instead of database regeneration breaking our tests most of the time. Once the database was stable, I was able to update our database to Unicode 10. Without my improvements regarding reproducibility and property values becoming distinct for each property, updating to Unicode 10 would not have been possible. In addition, all Hangul (Korean) characters now have names in the Unicode database.

A big thing I wanted to implement was the Unicode Collation Algorithm, which ended up being a total success. I was able to retain the ability to choose which collation levels the user wants to sort with, as well as to reverse the sort order of individual collation levels. And I did not only implement one algorithm: I also implemented the Knuth-Morris-Pratt string search algorithm, which can take advantage of repeated characters in the needle (it can be multiple times faster if you have sections repeating). The Knuth-Morris-Pratt algorithm was adjusted to use either the new cached grapheme iterator or the simple lookup, depending on whether the haystack was flat or strand-based. Indexing a strand-based string with a one-grapheme-long needle was sped up 9x by making a special case for it.

Practically all string ops were sped up, often by multiple times, due to getting MVM_string_get_grapheme_at_nocheck inlined. In addition to this, I changed the way we access strings in many of our most used string ops, intelligently using either grapheme iterators, cached grapheme iterators or direct access depending on circumstances.
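For readers unfamiliar with it, here is a minimal Python sketch of Knuth-Morris-Pratt search (a textbook version, not MoarVM's grapheme-aware implementation): the failure table means the haystack position never moves backwards, which is why repetitive needles benefit so much.

```python
# Textbook Knuth-Morris-Pratt substring search, for illustration only.

def kmp_index(haystack, needle):
    if not needle:
        return 0
    # Failure table: length of the longest proper prefix of needle[:i+1]
    # that is also a suffix of it.
    fail = [0] * len(needle)
    k = 0
    for i in range(1, len(needle)):
        while k and needle[i] != needle[k]:
            k = fail[k - 1]
        if needle[i] == needle[k]:
            k += 1
        fail[i] = k
    # Scan the haystack; on a mismatch, fall back in the needle only.
    k = 0
    for i, ch in enumerate(haystack):
        while k and ch != needle[k]:
            k = fail[k - 1]
        if ch == needle[k]:
            k += 1
        if k == len(needle):
            return i - k + 1    # match found at this offset
    return -1

assert kmp_index("abababca", "ababca") == 2
```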
With MVM_string_get_grapheme_at_nocheck inlined, accessing graphemes through this function was sped up by between 1.5x for strands and up to 2x for flat strings. Ops we use a lot, like the op that backs eq and nqp::eqat, were given special casing for Strand ←→ Strand, Flat ←→ Flat and Strand ←→ Flat (which also covers Flat ←→ Strand). This special casing sped up eq by 1.7x when one string is a strand and one is flat, and 2.5x when both strings are flat. Applying similar optimizations to index made for a 2x speedup when the haystack is flat and a 1.5x speedup when the haystack is a strand (on top of the previous improvements due to the Knuth-Morris-Pratt algorithm).

I fixed a longstanding bug in NQP which caused ignoremark+ignorecase operations to be totally broken. I fixed this by adding more MoarVM ops and refactoring the code to have many fewer branches. In MoarVM we now use a centralized function to do each variation of with/without ignorecase and ignoremark, which is also fully compatible with foldcase operations as well as ignoremark. Doing the ignoremark/ignorecase indexing work sped them up by multiple times, and on top of that, they became 2x faster when the haystack was made up of 10 strands, by implementing a cached grapheme iterator.

I implemented full Unicode 9.0 support, not just in our grapheme segmentation, but also in our other ops, refactoring how we store synthetic codepoints to allow the 'base' codepoint to be a codepoint other than the first in the synthetic, to support Prepend codepoints. Our concatenation was improved so as to make full renormalization of both input strings no longer needed in almost all cases. The x repeat operator was fixed so it always creates normalized strings; previously it could create unnormalized strings, causing issues when they were used.

I believe I have more than accomplished what I set out to do in this grant.
I made tons of user-facing changes: to speed, to Unicode normalization support, and full Unicode 9.0 support. I added awesome collation features and fixed all the major issues with decoding and working with UTF8-C8 representations. I have listed an incomplete list of bonus deliverables below the deliverables which were part of this project.

## Deliverables

• I documented MoarVM’s string representation, with lots of good information for future developers as well as interested users.
• Hangul syllables now have Unicode names in our database, with a test added in roast.
• I implemented the Unicode Collation Algorithm [866623d9]
• Tests have been added in roast for the UCA and the unicmp op
• I wrote documentation on our Unicode Collation Algorithm implementation
• Regarding language-specific sort: this would involve us using data from the Unicode CLDR. Once we have this data available from MoarVM, it simply requires a conditional to override DUCET and check a different set of data before checking the DUCET data. This information is in our documentation for collation.
• Text normalization
• Speed of normalization was improved
• Full Unicode 9.0 support for text segmentation and normalization was added
• While I did not fully rewrite the database, I did solve the needed issues:
• Property values are now unique for each property
• Running the generation script creates a functional database every time it is run, rather than only some of the time
• I added Unicode Collation data to our database, generated from a Perl 6 script, which happened to be the only property required to complete my deliverables

## Bonus Deliverables

Here is a somewhat complete list of bonus deliverables:

• Updated our database to Unicode 10. This was only possible once I had fixed the problems with the database generation and made property values unique.
• Implemented the Knuth-Morris-Pratt string search algorithm
• Set up Appveyor builds. Appveyor builds and tests MoarVM on Windows, similar to Travis CI.
• Fixed ignoremark+ignorecase regex when used together, as well as huge speed increases.

### UTF8-C8/UTF-8

• Fix UTF8-C8 encoding so it can encode values > 0x10FFFF as well as surrogates
• Fix UTF8-C8 strings so they do not get corrupted and flattened when string operations are performed on them
• MVM_string_utf8_decodestream: free the buffer on malformed UTF-8 [a22f98db]

### String Ops

• Have MVM_string_codes iterate the string with a codepoint iterator [ed84a632]
• Make eqat 1.7x-2.5x faster [3e823659]
• Speed up index 9x when the haystack is a strand and the needle is 1 long [0b364cb8]
• Implement the Knuth-Morris-Pratt string search algorithm [6915d80e]
• Add indexim_s op and improve/fix bugs in eqatim_s [127fa2ce]
• Fix a bug in index/eqat(im) and in ord_getbasechar causing us to not decompose the base character when the grapheme was a synthetic [712cff33]
• Fix MVM_string_compare to support deterministic comparing of synthetics [abc38137]
• Added a codes op which gets the number of codepoints in a string rather than the number of graphemes. Rakudo is now multiple times faster doing the .codes op. Before, it would request an array of all the codepoints and then get the number of elements, which was much slower.

### Fix string ops with Prepend characters

• Rework MVMNFGSynthetic to not store the base separately [3bd371f1]
• Fix case change when the base cp isn’t the first cp in the synthetic [49b90b99]
• For degenerate synthetics with Prepend and Extend, set the base cp to the 1st cp [db3102c4]
• Fix ignoremark with Prepend characters and the ordbaseat op [f8a639e2]

### Memory/GC/Build Fixes

• Fix segfault when compiling nqp with musl as libc [5528083d]
• Avoid recursion in optimize_bb() when there is only 1 child node [6d844e00]
• Fix various memory access/garbage collection issues in some string ops that were showing up when running in Valgrind or using Address Sanitizer

### Grapheme Iteration

• Ensure we can move forward in a grapheme iterator even if we aren’t starting at the very beginning of a string.
• Use grapheme iterator cached for ignorecase/ignoremark index ops [b2770e27]
• Optimize MVM_string_gi_move_to. Optimize the loop which finds the correct location within a strand so that it isn’t a loop and is just conditionals. [c2fc14dd]
• Use MVMGraphemeIter_cached for strands in KMP index [ce76c994]
• Allow MVM_string_get_grapheme_at_nocheck to be inlined
• Refactor code into iterate_gi_into_string() to reduce code duplication [1e92fc96]

## Tests Added to Roast

• Add tests for testing collation. Tests for the unicmp operator [5568a0d63]
• Test that Hangul syllables return the correct Unicode name [6a4bc5450]
• Add tests for case changes when we have Prepend codepoints [00554ccbd]
• Add tests for the x operator to ensure normalization is retained [1e4fd2148] [59909ca9a]
• Add a large number of string comparison tests [51c4ac08b]
• Add tests to make sure synthetics compare properly with cmp [649d9dc50]
• Improve ignoremark tests to cover many different cases [810e218c8]
• Add ignoremark tests to cover JSON::Tiny regression + other issue [c185acc57]
• Add generated tests (from UCD data) and manually created ones to ensure string concatenation is stable when the concatenated string would change the normalization. [2e1f7d92a] [9f304b070] [64e385faf] [0976d07b9] [59909ca9a] [2e1f7d92a] [88635564e] [a6bbc73cf] [a363e3ff1]
• Add test for MoarVM Issue #566 .uniprop overflow [9227bc3d8]
• Add tests to cover RT #128875 [0a80d0d2e]
• Make new version of GraphemeBreakTest.t to better test grapheme segmentation [54c96043c]

## NQP Work

Below is a listing of some of the commits I made to NQP. This included adding the ops I created over the course of the grant: eqatim, indexim, indexicim, eqaticim, and codes (which gets the number of codepoints in a string rather than graphemes). The testing in NQP was inadequate for our string ops, so I added hundreds of tests for practically all of the string ops, so we could properly test the different variations of index* and eqat*.
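One of the deliverables above was an implementation of Knuth-Morris-Pratt string search. For readers unfamiliar with it, KMP avoids re-scanning the haystack by precomputing, for each needle position, the longest proper prefix of the needle that is also a suffix there. A minimal sketch in Python follows; this is purely illustrative, since MoarVM's real implementation is in C and operates on graphemes via the strand-aware cached iterator mentioned above.

```python
def kmp_index(haystack, needle, start=0):
    """Return the first index of needle in haystack at or after start, or -1."""
    if not needle:
        return start
    # Build the failure table: fail[i] is the length of the longest proper
    # prefix of needle[:i+1] that is also a suffix of it.
    fail = [0] * len(needle)
    k = 0
    for i in range(1, len(needle)):
        while k > 0 and needle[i] != needle[k]:
            k = fail[k - 1]
        if needle[i] == needle[k]:
            k += 1
        fail[i] = k
    # Scan the haystack; on a mismatch, fall back in the needle via the
    # failure table instead of moving backwards in the haystack.
    k = 0
    for i in range(start, len(haystack)):
        while k > 0 and haystack[i] != needle[k]:
            k = fail[k - 1]
        if haystack[i] == needle[k]:
            k += 1
        if k == len(needle):
            return i - len(needle) + 1
    return -1
```

Because the haystack index never moves backwards, the search is linear in the combined length of haystack and needle, which is why it pays off on long strands.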
### NQP Documentation

• Add docs for a few variations of index/eqat [589a3dd5c]
• Bring the unicmp_s op docs up to date [091625799]
• Document hasuniprop moar op [650840d74]

### NQP Tests

• Add more index* tests to test empty string paths [8742805cb]
• Run indexicim through all of the indexic tests [26adef6be]
• Add tests for RT #128875, ignorecase+ignoremark false positives [b96a0afe7]
• Add tests for nqp::eqatim op on MoarVM [6fac0036e]

### Other Work

• Added script to find undocumented NQP ops [0ead16794]
• Added nqp::codes to QASTOperationsMAST.nqp [59421ffe1]
• Update QASTRegexCompilerMAST to use new indexicim and eqaticim ops [18e40936a]
• Added eqatim and indexim ops. Fix a bug when using ignoremark [9b94aae27]
• Added nqp::hasuniprop op to QASTOperationsMAST.nqp [d03c4023a]

## p6steve: Clone Wars

### Published by p6steve on 2017-09-20T18:52:04

Apologies to those who have OO steeped in their blood. I am a wary traveller in OO space; maybe I am a technician, not an architect, at heart. So for me, no sweeping frameworks unless and until they are needed. And, frankly, one can go a long way on procedural code with subroutines to gather repetitive code sequences. (And don’t get me started on functional programming…)

Some time ago, I was tasked to write a LIMS in old perl. Physical ‘aliquots’ with injections of various liquids would be combined and recombined by bio-machines. This led to a dawning realization that these aliquot objects could be modelled in an object style with parent / child relationships. After a few weeks, I proudly delivered my lowly attempt at ‘Mu’ for this (and only this) problem. Kudos to the P6 team – after a couple of weeks in here, it’s just sensational the level of OO power that the real Mu delivers.

Now, hard at work on the perl6 version of Physics::Unit, I am wondering how to put the OO theory into productive practice.
One of my aims was to find a medium-sized (not core tools) problem that (i) I know something about and (ii) would be a good-sized problem to wrangle. So I am enjoying the chance to design some classes and some interfaces that will make this all hang together. But, as an explorer, it has become clear that I only have three options. The problem runs like this:

• There is a parent Measure class that contains the methods for any real-world measurement, handling dimension, units, value and errors.
• Then there are child classes for my Distance $d = '42 m', my Time $t = '3 s', etc.
• First level, I have an operation like … my $r = '23 ft' + $d;

Initially I had some success with object types ::T – but these only let you read the type and duplicate if needed for a new left hand side container. Then I tried the built-in (shallow) clone method. But…

• What about the operation … my $s = $d / $t?
• How do I create a new Speed object programmatically?
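Not being the author of Physics::Unit, I can only sketch one possible answer to that question: derive the result type from the dimensions of the operands, instead of cloning either operand. Below is a hypothetical Python sketch of that idea; every name in it (Measure, the registry, the dims tuples) is my invention for illustration, not part of the module.

```python
# Dimensions as (length, time) exponent tuples: Distance is (1, 0), Time is (0, 1).
class Measure:
    registry = {}  # maps a dimension tuple to the subclass that represents it

    def __init_subclass__(cls, dims=None, **kw):
        super().__init_subclass__(**kw)
        if dims is not None:
            cls.dims = dims
            Measure.registry[dims] = cls

    def __init__(self, value):
        self.value = value

    def __truediv__(self, other):
        # Dividing measures subtracts dimension exponents, then the result
        # type is looked up by dimension rather than cloned from an operand.
        dims = tuple(a - b for a, b in zip(self.dims, other.dims))
        cls = Measure.registry.get(dims, Measure)  # plain Measure if unnamed
        result = cls.__new__(cls)
        result.value = self.value / other.value
        result.dims = dims
        return result

class Distance(Measure, dims=(1, 0)): pass
class Time(Measure, dims=(0, 1)): pass
class Speed(Measure, dims=(1, -1)): pass

d = Distance(42.0)   # metres
t = Time(3.0)        # seconds
s = d / t            # a Speed, created programmatically from the dimensions
```

The same registry lookup would serve multiplication, and dimension combinations without a named class simply fall back to a generic Measure.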

# Fixes

## Grapheme Cluster Break

As I said last week, I made improvements to the script that tests our breakup of graphemes. We now have full support for the Prepend property that was added in Unicode 9.0, and we pass all the tests for regional indicators. The only tests we still fail in GraphemeClusterBreakTest.t are a few emoji tests, and I believe we only fail 3 or so of these! The Prepend fixes required us to save more state while parsing, as Prepend is different from all other Unicode grapheme break logic in that it comes before, not after, a base character.

## Ignorecase+Ignoremark Regex

The longstanding bug I mentioned in my previous status report has now been fixed. The bug was in regex when both ignorecase and ignoremark adverbs were used.

```perl6
say "All hell is breaking loose" ~~ m:i:m/"All is fine, I am sure of it"/
# OUTPUT«｢All hell is breaking loose｣␤»
```

This was the output before the fix; the regex should not have matched.


This bug occurred when the match reached the end of the haystack with every grapheme compared so far matching the needle.

If the needle extended past the end of the haystack, the code would erroneously report a match, since it only verified the needle against the graphemes up to the end of the haystack.

This would cause 'fgh' to be found in 'abcdefg'. The bug only occurred at the very end of the haystack.
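The bug is easy to reproduce with a naive matcher. The Python below is a simplified model of the logic, not MoarVM's actual C code: the broken version stops comparing at the end of the haystack instead of failing when the needle still has characters left.

```python
def equal_at_buggy(haystack, needle, pos):
    # Broken: only compares while inside the haystack, so a needle that
    # runs off the end still "matches" everything it was compared against.
    for i, ch in enumerate(needle):
        if pos + i >= len(haystack):
            break  # ran off the end of the haystack, yet we report success
        if haystack[pos + i] != ch:
            return False
    return True

def equal_at_fixed(haystack, needle, pos):
    # Fixed: a needle extending past the end of the haystack cannot match.
    if pos + len(needle) > len(haystack):
        return False
    return haystack[pos:pos + len(needle)] == needle
```

With these definitions, the buggy version claims 'fgh' matches 'abcdefg' at position 5, while the fixed version correctly rejects it.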

The internal string_equal_at_ignore_case_INTERNAL_loop now returns -1 if there was no match, and 0 or more if there was a match at that index.

On a match, the return value carries new information: 0 if the haystack did not expand, or some positive integer giving how much the haystack expanded when it was casefolded.

As explained in my previous post, information about when characters expand under casefolding must be retained.

The plan is to expose this information in some way at a future date. For example, if we search for 'st' inside the string 'ﬆabc', nqp::indexic (index ignorecase) will indicate that the match is located at index 0, but Rakudo will then return 'ﬆa' when it should instead have returned 'ﬆ'.
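The expansion problem is easy to demonstrate with Python's casefold, which applies the same Unicode case folding: the ligature 'ﬆ' folds to the two characters "st", so lengths measured in the folded haystack no longer map one-to-one back onto the original string. (This is only an illustration of the Unicode behaviour, not of MoarVM's internals.)

```python
# U+FB06 LATIN SMALL LIGATURE ST case-folds to the two characters "st",
# so indices in the folded haystack drift away from the original string.
haystack = "\ufb06abc"        # 'ﬆabc', 4 characters
needle = "st"

folded = haystack.casefold()  # 'stabc', 5 characters: one became two
print(len(haystack), len(folded))

# A match of 'st' at folded index 0 covers only the single original
# character 'ﬆ', not 'ﬆa', which is why the ops must record how much
# the haystack expanded while casefolding.
assert folded.startswith(needle)
```

Any character whose full case folding is longer than itself (the German 'ß', the other f-ligatures, and so on) triggers the same mismatch.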

For now this additional information is internal only, and the return values of the nqp::indexic_s and nqp::eqatic_s ops have not changed.

### NQP Codepath Problems…

Previously there were far too many codepaths handling the different combinations of regex adverbs: none, ignorecase, ignoremark, and ignorecase+ignoremark. Problematically, each combination had its own codepath. To really solve this bug and improve the code quality, I decided to clean this up.

In my past work I had already added an nqp::indexic op, so it was time to add more! I added nqp::indexicim and nqp::eqaticim ops, reusing most of the existing code. This barely increased our code burden on the MoarVM side, and greatly reduced the possibility of bugs creeping into the various combinations of ignorecase/ignoremark ops.
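The consolidation can be pictured as replacing one function per adverb combination with a single comparison routine parameterized by flags. The Python below is a hypothetical sketch of that structure only: MoarVM's real ops work on graphemes and synthetics, and stripping combining marks here merely stands in for the actual ignoremark logic.

```python
import unicodedata

def fold(ch, ignorecase, ignoremark):
    # One normalization routine shared by every adverb combination.
    if ignoremark:
        # Keep only the base character: decompose, then drop combining
        # marks (general category Mn).
        decomposed = unicodedata.normalize("NFD", ch)
        ch = "".join(c for c in decomposed if unicodedata.category(c) != "Mn")
    if ignorecase:
        ch = ch.casefold()
    return ch

def equal_at(haystack, needle, pos, ignorecase=False, ignoremark=False):
    # Single codepath: flags select the behaviour, instead of four
    # near-duplicate functions for the four adverb combinations.
    if pos + len(needle) > len(haystack):
        return False
    return all(
        fold(haystack[pos + i], ignorecase, ignoremark)
        == fold(ch, ignorecase, ignoremark)
        for i, ch in enumerate(needle)
    )
```

A fix to the shared routine then automatically covers every combination, which is exactly the bug-surface reduction described above.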

This was a very longstanding Unicode bug (I don't think both adverbs together ever worked), so it's great that it is now fixed :).

# Coming Up

I will continue fixing the issues in the new Unicode Collation Algorithm implementation, as I described earlier in this post. I also plan on taking stock of all of the current Grapheme Cluster Break issues, which now exist only for certain Emoji (though the vast majority of Emoji work properly).

I will also be preparing my talks for the Amsterdam Perl conference!

### Sidenote

I released a new module, Font::QueryInfo, which allows you to query font information using FreeType. It can even return the codepoints a font supports as a list of ranges!