Perl 6 RSS Feeds

Steve Mynott (Freenode: stmuk) steve.mynott (at) / 2018-06-18T16:11:14

my Timotimo \this: No Major Breakthroughs

Published by Timo Paulssen on 2018-06-15T13:45:32

Sadly, the time since the last post on this blog hasn't been fruitful with regards to the profiling project. There have been slight improvements to the profiler inside MoarVM, like handling profiles with a very deep call graph better, making the first GC run show up again, capturing allocations from optional parameters properly, and hopefully finally making programs that have multiple threads running no longer crash during the profile dumping phase. A recently merged branch by esteemed colleague brrt will allow me to properly fix one nasty issue that remains in the profiler that relates to inlining.

Even though I can't show off lovely screenshots of the profiler UI (if you consider Coder's Art™ lovely), I can briefly go over the changes that have happened and what's next on the list. And of course I'm still very much interested in finishing the grant work!


Missed Optional Parameters

The first change I'd like to talk about is the one that was causing allocations from boxing optional parameters to go missing from the profile. Optional parameters are implemented as an op that accesses the passed arguments to see if something was present or not. Then it either runs code to put the default value in - if no argument was present - or it skips over that code. Additionally, it handles arguments that were passed as native ints, nums, or strings.

If an object was expected by the code that uses the parameter, this op will also create a box for the value, for example an Int object. The crucial mistake was in the instrumentation by the profiler.

Finding everything that is allocated is done by putting a little "take note of this object" op after every op that may create an object. This op then checks if the object was probably allocated by the last instruction, or if it was probably already logged earlier. If it was just allocated, that allocation is recorded for the profile.

The problem in this case lies in the placement of the logging op: It was placed right after the instruction that grabs the argument. However, that made it land in the place that gets skipped over if an argument was present. So either no argument was passed, and the logging op was just asked to log that a null was allocated, or an argument was passed that was perhaps boxed, and the logging op was skipped over. Oops!

Fixing this was simply a matter of following the skip and putting the logging op in the right place.
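As a language-agnostic sketch (Python standing in for the real MoarVM bytecode machinery, with made-up op names), here is the placement bug in miniature: a logging op that sits inside the region skipped when an argument is present never sees the boxed value, and logs a null when no argument was passed.

```python
# Toy model of the misplaced logging op (not real MoarVM bytecode).
# "param" grabs an optional argument; if one was passed it boxes the value
# and jumps over the default-value block to "target".

def profile(program, arg_passed):
    logged, pc, last_alloc = [], 0, None
    while pc < len(program):
        op = program[pc]
        if op == "param":
            if arg_passed:
                last_alloc = "boxed-arg"      # boxing allocates an object
                pc = program.index("target")  # skip the default-value block
                continue
            last_alloc = None                 # no argument, nothing boxed
        elif op == "log":
            logged.append(last_alloc)         # record the last allocation
        elif op == "default":
            last_alloc = "default-value"
        pc += 1
    return logged

buggy = ["param", "log", "default", "target"]  # log sits in the skipped region
fixed = ["param", "default", "target", "log"]  # log sits after the skip

print(profile(buggy, arg_passed=True))   # [] - the boxed arg goes unlogged
print(profile(fixed, arg_passed=True))   # ['boxed-arg']
```

With no argument passed, the buggy layout logs `[None]`, matching the "log that a null was allocated" case described above.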

Multi-threaded Programs Crashing Mysteriously

If you used the profiler on code that runs multiple threads, you may have seen very suspicious looking internal error messages like "const_iX NYI" pop up. This was caused by the instrumentation aspect of the profiler, more specifically what it did when the instrumentation was no longer needed. Allow me to explain:

Instrumentation in this context refers to creating a version of the program bytecode that does some extra work in the right places. For the profiler this includes putting ops in the code that record that a function was called or exited, and ops that record allocations of objects.

This instrumentation happens lazily, i.e. when a function is entered the first time, it runs into the "instrumentation barrier", which pauses the program and creates the instrumented code right then and there. The instrumented code then gets installed and the program continues. This is implemented by having a global "instrumentation level" that just gets increased by 1 every time functions should go through an instrumentation step. This is done when profiling starts, and it is done when profiling ends.

Here's where the problem lies: Profiling is turned on before user code runs, which just happens to always be in single-threaded territory. However, profiling gets turned off as soon as the main thread is finished. This is done by increasing the instrumentation level by 1 again. Every function that is entered from now on will have to go through instrumentation again, which will restore the original bytecode in this case.
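The barrier idea can be sketched like this (a toy Python model with invented names; the real thing operates on MoarVM bytecode and per-frame state): each function remembers the instrumentation level it was last prepared for, and bumping the global level forces re-instrumentation on next entry.

```python
# Sketch of the lazy instrumentation barrier: bumping the global level
# invalidates every function's cached bytecode, so the next entry into a
# function re-instruments (or restores) it right then and there.

GLOBAL_LEVEL = 0

class Function:
    def __init__(self, original_bytecode):
        self.original = original_bytecode
        self.current = original_bytecode
        self.seen_level = 0

    def enter(self, profiling):
        if self.seen_level != GLOBAL_LEVEL:        # instrumentation barrier
            self.current = (["prof_enter"] + self.original + ["prof_exit"]
                            if profiling else list(self.original))
            self.seen_level = GLOBAL_LEVEL
        return self.current

def bump_level():
    global GLOBAL_LEVEL
    GLOBAL_LEVEL += 1

f = Function(["work"])
bump_level()                       # profiling starts
print(f.enter(profiling=True))     # ['prof_enter', 'work', 'prof_exit']
bump_level()                       # profiling ends
print(f.enter(profiling=False))    # ['work'] - original bytecode restored
```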

Other threads might still continue running, though. The first example that made this problem clear was finding the 1000th prime by grepping over a hypered range from 0 to infinity. Crucially, after finding the 1000th prime, some workers were still busy with their batch of numbers.

Here's where the instrumentation barrier becomes a problem. One of the remaining workers calls into a function, for example is-prime, for the first time since the instrumentation level was changed. It will have its instrumented bytecode replaced by the original bytecode. However, the other threads, which may still be inside is-prime in this example, will not know about this. They keep happily interpreting the bytecode when all of a sudden the bytecode changes.

Since the uninstrumented bytecode is shorter than the instrumented bytecode, the worst case is that it reads code past the end of the bytecode segment, but the more common case is that the instruction pointer just suddenly points either at the wrong instruction, or in the middle of an instruction.

Instructions usually start with the opcode, a 16-bit number usually between 0 and 1000. The next part is often a 16-bit number holding the index of a register, which is usually a number below about 40, but quite often below 10. If the instruction pointer accidentally treats a register number as an opcode, it will therefore often land on ops with low numbers. Opcode 0 is no_op, i.e. "do nothing". The next three ops are const_i8 through const_i32, which all just throw the exception that I mentioned in the first paragraph: "const_iX NYI". Two spots ahead is the op const_n32, which also throws an NYI error.
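A tiny illustration (Python, with a made-up two-word instruction encoding; opcode numbers follow the text, 0 = no_op, 1-3 = const_i8 through const_i32) of how a misaligned read turns a register index into a low opcode:

```python
# Decode a 16-bit little-endian word at a given offset and interpret it as
# an opcode. Reading at offset 2 lands in the middle of the instruction,
# so the register index (2) is misread as the opcode const_i16.

import struct

NAMES = {0: "no_op", 1: "const_i8", 2: "const_i16", 3: "const_i32"}

# one instruction: opcode 900 with register operand 2
code = struct.pack("<HH", 900, 2)

def read_op(bytecode, offset):
    (op,) = struct.unpack_from("<H", bytecode, offset)
    return NAMES.get(op, f"op_{op}")

print(read_op(code, 0))   # op_900 - the real opcode
print(read_op(code, 2))   # const_i16 - register index misread as an opcode
```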

And there you have it, mystery solved. But what's the solution to the underlying problem? In this case, I took the easy way out. All the profiling ops first check if profiling is currently turned on or not anyway, so leaving the instrumented code in after profiling has ended is not dangerous. That's why MoarVM now just keeps instrumentation the same after profiling ends. After all, the next thing is usually dumping the profile data and exiting anyway.

The Next Steps

The MoarVM branch that brrt recently merged is very helpful for a very specific situation that can throw the profiler off and cause gigantic profile files: When a block has its bytecode inlined into the containing routine, and the block that was inlined had a "return" in it, it knows that it has to "skip" over the inner block, since blocks don't handle returns.

However, the block is still counted as a routine that gets entered and left. The long and short of it is that returning from the inner block jumps directly to the exit, but having the block inlined frees us from doing the whole "create a call frame, and tear it down afterwards" dance. That dance would have contained telling the profiler that a frame was exited "abnormally"; since the regular "prof_exit" op that would have recorded the exit will be skipped over, tearing down the frame would have contained the logging.

In this particular case, though, no exit would be logged! This makes the call graph - think of it like a flame graph - look very strange. Imagine a function being called in a loop, and returning from an inner block as described above. It would miss all of the exits, so every time the function is called again, it will look like the function called itself, never returning to the loop. Every time around the loop, the call will seem to be nested deeper and deeper. Since the profiler keeps around the whole call graph, the file will just keep growing with every single iteration.
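A minimal model of the effect (plain Python, nothing MoarVM-specific): track call-graph depth from enter/exit events, and watch what happens when the exits go missing.

```python
# When every call logs an enter but exits are never logged, each call to f
# nests under the previous one, so the recorded depth (and with it the
# stored call graph) grows on every single loop iteration.

def record(events):
    depth, max_depth = 0, 0
    for ev in events:
        if ev == "enter":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ev == "exit":
            depth -= 1
    return max_depth

with_exits    = ["enter", "exit"] * 3   # loop calling f() three times
missing_exits = ["enter"] * 3           # exits skipped over, never logged

print(record(with_exits))     # 1 - f always sits directly under the loop
print(record(missing_exits))  # 3 - f appears to call itself ever deeper
```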

Now, how does brrt's code change this situation? It makes it very easy to figure out how many inlines deep a "return from this routine" op is, so that the profiler can accurately log the right amount of exits.

On the UI side of things, I want to bring the routine overview list into a good state that will finally be worth showing. The list of GC runs will also be interesting, especially since the profiler recently learned to log how each individual thread performed its GC run, but the current HTML frontend doesn't know how to display that yet.

Hopefully the wait for the next post on my blog won't be as long as this time!
  - Timo

Zoffix Znet: A Call to Action: Polish Perl 6 First Steps Experience

Published on 2018-06-13T00:00:00

Help us make beginners' Perl 6 experience better

Weekly changes in and around Perl 6: 2018.24 Discussion Redirect

Published by liztormato on 2018-06-11T16:25:28

Zoffix Znet took action on the news of last week that our old IRC log website was taken offline out of GDPR considerations. In a blog post entitled “How To Make Old #perl6 IRC Log Links Work” he explains how he wrote the algorithm that maps references to the old log website onto the secondary logger. After that, Moritz Lenz added a smart redirect on the old log website to make things more convenient. Meanwhile, Aleks-Daniel Jakimenko-Aleksejev has worked on making the same work for the #moarvm, #perl6-dev and even the previously-used #p6dev channels.

Spesh Plugins Merged

Jonathan Worthington merged his work on the MoarVM spesh plugin infrastructure, which he started a good 6 weeks ago. In his blog post titled Faster dispatches with MoarVM specializer plugins he explains the background, the problem, and how the new plugin infrastructure helps optimization in quite a few contexts (2.5x, 6.6x and 12x faster). All work that he’s doing as part of his most recent Perl Foundation grant.

Stack Hacking Improvements

The MoarVM internals work did not end with Jonathan’s work this week. Bart Wiegmans also published a blog post: Controlled Stack Hacking for the MoarVM JIT Compiler. He explains how a reorganization of some MoarVM internal data structures allows for much easier exception handling, reducing the memory footprint and general execution overhead as well.

Final Documentation Grant report

JJ Merelo has presented his Final Grant Report. Yours truly can only concur with the conclusions:

In general, I would say that the main objective of this grant, which was to improve the documentation, was achieved, not only by closing the outstanding issues, but also by paying attention to channels where Perl 6 is mentioned and creating issues on the repo when it was needed.

And the additional reports that JJ Merelo has authored are the icing on the very large cake!

Introduction to Application Development in Perl 6

Patrick Spek has sent in a Perl Foundation Grant Proposal to write a book about getting started with application development in Perl 6. Be sure to leave your comments!

Is Perl 6 faster than Perl 5 on average?

An interesting question on Reddit, with quite a few comments.

Running Perl 6 in Docker

Gabor Szabo shows how you can use a Docker image to run Perl 6 (Reddit comments).

Preliminary Schedule NLPW 2018

There’s a preliminary schedule for the coming Dutch Perl Workshop, on Saturday 7 July 2018 in Arnhem, The Netherlands. So far, the Perl 6 presentations are in a majority! And on Sunday 8 July, Andrew Shitov is going to give a whole day “Introduction to Perl 6 Workshop“.

Other Core Developments

Meanwhile on Twitter

Meanwhile on StackOverflow

Meanwhile on FaceBook

Meanwhile on perl6-users

Perl 6 in comments

  • Not worried about obscurity by raiph.
  • You are doing it wrong by Reini Urban.
  • I am about to learn Perl 6 by haj.
  • Apologies if you’re not interested by raiph.
  • Implemented as Virtual Machines by jcrites.
  • Better tooling by smewp.
Perl 6 Modules

    New Modules:

    Updated Modules:

    Winding Down

    Quite a few very nice advancements this week! Some applications will see that more than others, at least the spectest seems to run about 1% faster. And that’s the worst case, as there’s not a lot of code there that runs repeatedly.

    Which brings me to something that does repeat: the Perl 6 Weekly. So see you next week, live from The Perl Conference in Salt Lake City!

    brrt to the future: Controlled Stack Hacking for the MoarVM JIT Compiler

    Published by Bart Wiegmans on 2018-06-10T16:29:00

    Hi readers! Today I have a story about a recently-merged set of patches that allows MoarVM to use the on-stack return pointer to reduce the overhead of exception handling and other VM features for JIT compiled code. Maybe you'll find it interesting.

    As you might know, MoarVM is a bytecode interpreter. In some situations, MoarVM internals need to know the current position in the execution of the program. For instance, in exception handling, all exceptions thrown within a block are caught by the associated CATCH block or propagated if no such block exists. Such blocks are indicated as a range within the bytecode, and we find the associated CATCH block by comparing the current position with the known ranges.

    This is relatively straightforward to implement for the interpreter, because the interpreter must maintain a 'current position' pointer simply to function. (MoarVM stores a pointer to this pointer in a thread context object so that it is available throughout the VM). For the JIT that is another matter, because the control flow is handled implicitly by the CPU. The instruction pointer register (called %rip on amd64) cannot be read directly. Moreover, as soon as you enter a function that might want to use the current address (like the functions responsible for exception handling), you've left the 'program' code and entered VM code.

    So what we used to do instead is take the address of a position within the bytecode (as indicated by a label in the bytecode, a somewhat involved process) and store that in a per-frame field called the jit_entry_label. This field is necessary to support another MoarVM feature  - we use the interpreter as a trampoline (in the first or second sense of that definition). Because the interpreter is not recursive, JIT compiled code needs to return to the interpreter to execute a subroutine that was invoked (as opposed to calling an interpreter function, as perl5 does for exception handling). The primary purpose of this label is to continue where we left off after returning from another invoked program. But it can be used just as well for finding where we are in the execution of the program.

    The only problem then is that we needed to keep it up to date, which we did: on entry to every basic block (an uninterrupted sequence of code), we stored the current position in this field. Such blocks are quite common - every conditional statement, loop or other control flow change starts one - and every exception-handler scope change also needed a little snippet storing the current position. This was annoying.

    Furthermore, there are numerous MoarVM instructions that might change the control flow (or might not). For instance, the instruction responsible for converting an object to a boolean value might need to invoke the Bool method specific to that object's class - or, if no such method exists, fall back to a default implementation. We call such instructions invokish. When compiling code that contains such invokish instructions, we installed 'control guards' to check if the VM had in fact invoked another routine, and if so, to return to the interpreter to execute that routine. This too added quite a bit of overhead.

    I keep writing in the past tense because all of that is now gone, and that happened due to a simple realization. When we call a function (in C or assembly), we place the return address (the machine instruction after the call instruction) on the stack. We can read this value from the stack and use it wherever we want to know about the current position.

    I initially had implemented that using a stack walker function similar to the one in the link, except that I implemented it in assembly instead. (When writing this post I learned of the GCC __builtin_return_address and MSVC _ReturnAddress intrinsic functions, which presumably do the same thing). Unfortunately, that strategy didn't really work - it relies on the frame base pointer (%rbp) being placed right 'on top' of the return address pointer on the stack. Even with special compiler flags intended to preserve that behaviour, this assumption turned out to be unreliable.

    Fortunately I realized later that it was also unnecessary. Because the JIT compiler controls the layout of the compiled code frame, it also controls exactly where the return address will be stored when we compile a (C) function call. That means that we can simply take a pointer to this address and store that in the thread context structure. From that address, we can read exactly the current position in the compiled code, without having to explicitly store it so often. Furthermore, we can also write to this location, changing the address the function will return to. Effectively, this is a controlled 'on-stack goto', an idiom more often used for exploits than for good purposes - clearly this is an exception! We use this to force a return to the interpreter (with proper stack frame cleanup) for 'invokish' instructions that end up invoking. We can change control to go directly to an exception handler if it is in the same frame. This makes all the earlier control 'guard' fragments redundant, allowing us to remove them entirely. Thus, an invokish instruction that doesn't actually invoke now carries no extra cost.

    How much does this save? It depends a lot on the exact program, but I estimate about 5% of compiled code size, and from a hopelessly optimal (and fairly unrealistic) benchmark which I lifted from this blog post, approximately 10% of runtime. In real code, the effect is definitely nowhere near what jnthn++ or samcv++ achieved lately, but it's still nice. Also nice is that the code is quite a bit simpler than it was before.

    Anyway, that's all I have to tell today. Have fun hacking, and until next time!

    6guts: Faster dispatches with MoarVM specializer plugins

    Published by jnthnwrthngtn on 2018-06-09T00:01:42

    One of the goals for the current round of my Perl Foundation Performance and Reliability grant is to speed up private method calls in roles, as well as assignments into Scalar containers. What I didn’t expect at the time I wrote the grant application is that these two would lead to a single new mechanism in MoarVM to make them possible.

    The Scalar container assignment improvements are still to come; currently I have a plan and hope to make good progress on it next week. I do, however, have a range of dispatch-related performance improvements to show, including the private method case.


    MoarVM runs programs faster by analyzing how they run and producing specialized versions of parts of the program based on that information. It takes note of which code is run often (frequently called methods and hot loops), which types a block of code is called with, what types are returned from calls, what code a closure points to, and more. Note that it observes the runtime behavior, and so is not dependent on whether the program has type annotations or not.

    Calls are one of the most important things that the optimizer considers, be they method calls, subroutine calls or invoking a received closure. Method calls are especially interesting, because with a call like $obj.meth($arg), the method to be called depends on the exact type of $obj. Often, we end up producing a version of the code that is specialized for a particular type of $obj. We can therefore resolve the method once in this specialization, saving the method lookup overhead.

    But there’s more. Once we know exactly what method we’ll be calling, and if the method is fairly small, we can inline it into the caller, thus eliminating the call overhead too. Further, since we are inlining a specialized version of the code and have already proved that we meet the conditions for using that specialization, we can eliminate type checks on parameters. Inlining is even more powerful than that: it opens the door to a wider range of analyses that would not be possible without it, which lead to further program optimizations.

    The problem

    We can do this kind of optimization with method calls because MoarVM understands about method calls. It knows that if it is holding the type of the invocant constant, then the result of the dispatch can also be considered a constant.

    Unfortunately, there’s more than one case of method calling in Perl 6. While the majority of calls take the familiar $obj.meth($arg) form, we also have:

      • private method calls, like self!foo($bar)
      • qualified method calls, like $obj.Foo::meth()
      • duck method calls, like $obj.?foo

    In the first case, if the call is in a class, then we can resolve it at compilation time, since private methods aren’t virtual. Such calls are thus pretty fast. But what if the private method call is in a role? Well, then it was far slower. It took a method call on the meta-object, which then did a hash lookup to find the method, followed by invoking that method. This work was done by a call to a dispatch:<!> utility method. It was the same story for qualified calls and duck calls.

    So, let’s extend MoarVM to understand these kinds of calls?

    So if normal method calls are faster because MoarVM understands them, surely we can do better by teaching it to understand these other forms of calling too? Perhaps we could add some new ops to the VM to represent these kinds of calls?

    Maybe, but all of them come with their own rules. And those rules are already implemented in the metamodel, so we’d be doing some logic duplication. We make normal method calls fast by precomputing a method cache, which is just a hash table, and have the specializer do its lookups in that. While such an approach might work for private methods, it gets decidedly trickier in the other two cases. Plus those precomputed hashes take up a lot of space. There are hundreds of exception types in CORE.setting and every one of them has a precomputed hash table of all of its methods, with those methods from base classes denormalized into it. This means hundreds of hashes containing mappings for all of the methods that are inherited from Mu, Any, and Exception. We do lazily deserialize these, which helps, but it’s still fairly costly. Introducing more such things, when I already want rid of that one, didn’t feel like a good direction.
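To see why denormalization is costly, here is a back-of-the-envelope sketch (Python, with invented counts - 50 inherited methods and 300 exception types are illustrative, not measured):

```python
# A per-class method cache that denormalizes every inherited method means
# each of the many exception classes repeats all Mu/Any/Exception entries.

base_methods = {f"m{i}": object for i in range(50)}  # inherited methods

def method_cache(own):
    cache = dict(base_methods)  # inherited entries copied in (denormalized)
    cache.update(own)           # plus the class's own methods
    return cache

# ~300 exception types, each adding just one method of its own
caches = [method_cache({f"x{i}": object}) for i in range(300)]
total_entries = sum(len(c) for c in caches)
print(total_entries)   # 15300 entries for 300 mostly-empty classes
```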

    Let’s make MoarVM teachable

    Earlier in the post, I wrote this:

    It [the optimizer] knows that if it is holding the type of the invocant [of a method call] constant, then the result of the dispatch can also be held constant.

    And this is the key. The important thing isn’t that the specializer knows the precise semantics of the method dispatch. The important thing is that it knows the relationship between the arguments to a dispatch (e.g. the type that we’re calling the method on) and the result of the dispatch.

    This, along with considering the challenges of optimizing Scalar assignments, led me to the idea of introducing a mechanism in MoarVM where we can tell it about these relationships. This enables the specializer to insert guards as needed and then simply use the calculated result of the dispatch.

    Specializer plugins

    The new mechanism is known as “spesh plugins”, and I merged it into MoarVM’s master branch today. It works in a few steps. The first is that one registers a spesh plugin. Here’s the one for helping optimize private method calls:

    nqp::speshreg('perl6', 'privmeth', -> $obj, str $name {
        nqp::speshguardtype($obj, $obj.WHAT);
        $obj.HOW.find_private_method($obj, $name)
    });

    The registration provides the language the plugin is for, the name of the plugin, and a callback. The callback takes an object and a method name. The second line is the key to how the mechanism works. It indicates that the result that will be returned from this plugin will be valid provided the type of $obj precisely matches (that is, with no regard to subtyping relationships) the type of the $obj we are currently considering. Therefore, it establishes a relationship between the invocant type and the private method call result.

    Then, we just need to compile a private method call like:

    self!foo($bar, $baz)

    into:

    nqp::speshresolve('privmeth', self, 'foo')(self, $bar, $baz)

    We take care to evaluate self only once (obviously not a problem for self itself, but in general the invocant can be any expression, and may have side-effects).

    And that’s it. So what happens at runtime?

    When the interpreter encounters this call for the first time, it calls the plugin. It then stores the result along with the conditions. On later calls made in the interpreter, it uses this mapping table to quite quickly map the invocant type into the appropriate result. It’s a little cache. (Aside: this is a little more involved because we want lookups without locking, but also need to cope with multiple threads creating resolution races. Thanks to a generalized free-at-safepoint mechanism in MoarVM, this isn’t so hard.)
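The mechanism can be sketched as a guarded lookup table (illustrative Python; names like SpeshPluginSite are invented, and the real implementation is per-callsite, lock-free, and operates on MoarVM types rather than Python classes):

```python
# First call runs the plugin callback and records (guard, result); later
# calls just scan the guards. Exact type match, no subtyping - mirroring
# the speshguardtype semantics described above.

class SpeshPluginSite:
    def __init__(self, callback):
        self.callback = callback
        self.mapping = []                 # list of (guarded type, result)

    def resolve(self, obj, name):
        for guard_type, result in self.mapping:
            if type(obj) is guard_type:   # guard hit: reuse cached result
                return result
        result = self.callback(obj, name)
        self.mapping.append((type(obj), result))
        return result

def find_private_method(obj, name):       # stand-in for the HOW lookup
    return getattr(type(obj), "_" + name)

class C1:
    def _bar(self): return 1

class C2:
    def _bar(self): return 2

site = SpeshPluginSite(find_private_method)
print(site.resolve(C1(), "bar")(C1()))    # 1 - plugin ran, mapping recorded
print(site.resolve(C1(), "bar")(C1()))    # 1 - served from the mapping table
print(site.resolve(C2(), "bar")(C2()))    # 2 - second entry in the table
```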

    So that’s nice, and on its own would already be an improvement over what it replaced. But we haven’t even got to the exciting part yet! Each time we use this mapping, it records which mapping was used for the benefit of the optimizer. This information is stored in such a way that the specializer can work out which mappings are used with a particular set of parameter types to the method. So, in:

    role R {
        method foo() {
            self!bar()
        }
    }
    class C1 does R {
        method !bar() { 1 }
    }
    class C2 does R {
        method !bar() { 2 }
    }

    The method foo might be invoked with invocants of type C1 and C2. Thus the mapping table for the call self!bar will have two entries. We may (if the code is hot) produce two specializations of method foo, and if we do, then we will also be able to see that there is only ever one target of the private method call in each case. Thus, we can inline the appropriate !bar into the matching specialization of foo.


    Writing a module PM.pm6 that contains:

    role R {
        method m() { self!p }
        method !p() { 42 }
    }
    class C does R {
    }
    for ^10_000_000 {
        C.m;
    }

    And then running it with perl6 -I. -e 'use PM' used to run in 5.5s on my development machine. That’s only 1.8 million iterations of the loop per second, which means each is eating a whopping 1,650 CPU cycles, assuming a 3GHz CPU.

    With the new spesh plugin mechanism, it runs in 0.83s, over 6.5x faster. It’s over 12 million iterations of the loop per second, or around 250 CPU cycles per iteration. That’s still a good bit higher than would be good, but it’s a heck of a lot better.

    Note that due to the way roles are handled in non-precompiled code, the spesh plugin will not currently be used in a role in a script, which is why in this case I put the code into a module. This restriction can be lifted later.

    But wait, there’s more

    I also wrote a spesh plugin for qualified dispatches, like $obj.Foo::meth(). This one guards on two of its inputs, and has an error case to handle. Notice how we can avoid replicating this logic inside of MoarVM itself and just write it in NQP code.

    nqp::speshreg('perl6', 'qualmeth', -> $obj, str $name, $type {
        nqp::speshguardtype($obj, $obj.WHAT);
        if nqp::istype($obj, $type) {
            # Resolve to the correct qualified method.
            nqp::speshguardtype($type, $type.WHAT);
            $obj.HOW.find_method_qualified($obj, $type, $name)
        }
        else {
            # We'll throw an exception; return a thunk that will delegate to the
            # slow path implementation to do the throwing.
            -> $inv, *@pos, *%named {
                $inv.'dispatch:<::>'($name, $type, |@pos, |%named)
            }
        }
    });

    This gave an even more dramatic speedup. The program:

    role R1 {
        method m() { 1 }
    }
    role R2 {
        method m() { 2 }
    }
    class C does R1 does R2 {
        method m() {
            self.R1::m() + self.R2::m()
        }
    }
    for ^10_000_000 {
        C.m;
    }

    Used to take 13.3s. With the spesh plugin in effect, it now takes 1.07s, a factor of more than 12x improvement.

    And even a little more…

    I also wondered if I could get $obj.?foo duck dispatches to do better using a spesh plugin too. The answer turned out to be yes. First of all, here’s the plugin:

    sub discard-and-nil(*@pos, *%named) { Nil }
    nqp::speshreg('perl6', 'maybemeth', -> $obj, str $name {
        nqp::speshguardtype($obj, $obj.WHAT);
        my $meth := $obj.HOW.find_method($obj, $name);
        nqp::isconcrete($meth)
            ?? $meth
            !! &discard-and-nil
    });

    There are a couple of cases I decided to measure here. The first is the one where we wrote code with a .? call to handle the general case (for example, in a module), but then the program using the module always (or > 99% of the time) gives an object where we can call the method.

    class C {
    }
    class D {
        method m() { 42 }
    }
    for ^10_000_000 {
        (rand > 0.999 ?? C !! D).?m()
    }

    The rand call, compare, and conditional are all costs in this code besides the call I wanted to measure, so it’s not such a direct measurement of the real speedup of .?. Still, this program went from taking 10.9s before to 4.29s with the spesh plugin in place – an improvement of 2.5x. It achieves this by doing a speculative inline of the method m anyway, and then using deoptimization to fall back to the interpreter to handle the 0.1% of cases where we get C and not D. (It then, at the end of the loop body, falls back into the hot code again.) Note that the inlining and deopt just naturally fell out of things the specializer already knew how to do.

    But had this come at the cost of making really polymorphic cases slower? Here’s another benchmark:

    class C {
    }
    class D {
        method m() { 42 }
    }
    for ^10_000_000 {
        (rand > 0.5 ?? C !! D).?m()
    }

    This one goes from 7.60s to 4.92s, a 1.5x speedup. Spesh can’t just punt this to doing a deopt for the uncommon case, because there is no uncommon case. Still, the guard table scan comes out ahead.

    (By the way, I think a lot of the slowness in this code – though I didn’t think of it when I wrote the benchmark – is that rand returns a Num, but 0.5 and 0.999 are Rats, so it is doing a costly type coercion before comparing.)

    And what next?

    Next I’ll be taking on Scalar containers and assignment, seeing what I can do with spesh plugins there, and hoping my ideas lead to as positive results as has been seen here.

    Also, this isn’t the final word on the various benchmarks in this post either. I know full well that the current spesh plugin implementation is inserting some redundant guards, and a bit of effort on that front can probably get us another win.

    Zoffix Znet: How To Make Old #perl6 IRC Log Links Work

    Published on 2018-06-06T00:00:00

    Find out where PerlGeek.De IRC log links now link to

    Weekly changes in and around Perl 6: 2018.23 Opened/Closed

    Published by liztormato on 2018-06-04T23:23:04

    Jonathan Worthington has opened the Comma IDE Early Supporter Program for business with a simple Tweet.
    So if you’re a fan of using IDEs, and you are or want to be working with Perl 6, now is your chance to get this developer efficiency boost! (FaceBook and more FaceBook comments).

    Closed For Business

    After an 11-year run, Moritz Lenz has had to take the famous IRC log website offline because of GDPR considerations, especially the German implementation of it.

    I’m sorry, but I had to shut down the IRC logs.

    I have found no way to make public IRC logs compliant with the privacy policies enforced by the GDPR and its implementations.

    The source code of the underlying software is still available from GitHub. Use it based on your own judgement and understanding of the law.

    I’d like to thank all contributors, users and visitors for their support.

    Hopefully we will be able to host this service somewhere else really soon. In the meantime, logs for some Perl 6 related channels can be found on #perl6, #perl6-dev, #perl6-toolchain and #moarvm. Comments can be seen on /r/perl, /r/perl6, Hacker News and Twitter.

    I think a BIG THANK YOU is in order for Moritz Lenz for this (and the many other) services he has rendered to the Perl 6 community!

    No European TPC Venue for 2019 yet

    More bad news: Thomas Klausner tells us that there are still no proposals to host The European Perl Conference in 2019. Please get working on your proposals and send them in. Thank you in advance on behalf of all current and future visitors!

    JJ Merelo’s Documentation Grant Completed

    JJ Merelo has completed his work on the Perl 6 documentation repository. His work is visible and we’re better for it. And he already has published a scientific report (Perl 6 documentation repository through time: dealing with issues) about it. Soon we should have the final grant report!

    Squashathon Results

    Last Saturday saw another Squashathon, this time focused on the Perl 6 bots. Among other improvements, all whateverable bots now support multifile gists. See the overview for more information.

    Perl 6 Tutorials at TPCiSLC

    There are still a few seats available at the Perl 6 tutorials at The Perl Conference in Salt Lake City (17 – 22 June 2018)!

    Also be sure to check out the The Perl Conference 2018 Newsletter: 2 June 2018 by Irish Pebbles.

    Call for Grant Proposals

    The May 2018 Round of Grant Proposals will close on June 9th. Make sure you get your Grant Proposals in! Whether they are for Perl in general, Perl 5 or Perl 6. Let The Perl Foundation help you do great things for the community!

    Blog Posts

    Core Developments

    Meanwhile on Twitter

    Meanwhile on StackOverflow

    Meanwhile on FaceBook

    Meanwhile on perl6-users

    Meanwhile on PerlMonks

    Perl 6 in comments

    Perl 6 modules

    New Modules:

    Updated Modules:

    Winding Down

    A very full, happy and a sad week. Hopefully next week only a full and happy week. See you then!

    Weekly changes in and around Perl 6: 2018.22 GDPR To You Too

    Published by liztormato on 2018-05-28T21:01:36

    There is no escape: the PAUSE now has a Privacy Policy, Neil Bowers explains in a blog post (Reddit comments). It’s good to see that the Perl community takes action on such matters that affect us all.

    2018.05 Rakudo Compiler Release

    Thanks to the work of many, specifically Samantha McVey (MoarVM) and Aleks-Daniel Jakimenko-Aleksejev (nqp and Rakudo), there is a new Rakudo compiler release. And within a short time after that, pre-compiled Linux packages in several flavors were made by Claudio Ramirez.

    Perl Developer Survey Results

    The results of the Perl Developer Survey were published. Which caused some comments on Reddit. In that respect yours truly can only agree with a comment by Zoffix Znet:

    As for the survey itself, it’d be nice if it were clearer on whether it’s collecting data on Perl 5 and Perl 6 programmers or just on Perl 5. It starts off by referring to “Perl 5” in the first question and “Perl” in the next, suggesting it covers both languages, yet later questions (e.g. frameworks) only include Perl 5’s frameworks.

    It’s Squashathon Time Again This Weekend

    Well, technically for as long as it is Saturday 2 June 2018 anywhere in the world. The issues of the perl6/whateverable repository will be the focus of this Squashathon. Check out the wiki for more information and hope to see you there this weekend!

    Blog Posts

    Core Developments

    Most of these core developments happened after the 2018.05 Rakudo Compiler Release.

    Meanwhile on Twitter

    Meanwhile on StackOverflow

    Meanwhile on FaceBook

    Meanwhile on perl6-users

    Perl 6 in comments

    Perl 6 Modules

    New Modules:

    Updated modules:

    Winding Down

    With all-time record-breaking high temperatures all week where yours truly is writing this, it’s hard not to feel things are heating up. And that’s just the world, not Perl 6! See you next week to feel the heat of more Perl 6 goodies!

    Weekly changes in and around Perl 6: 2018.21 Mitigating Denial

    Published by liztormato on 2018-05-21T20:49:28

    Samantha McVey explains in an excellent blog post titled “Secure Hashing for MoarVM to Prevent DOS Attacks” how hash-based Denial Of Service attacks work, and what she has done so far and will do in the near future to prevent the evil-doers from getting their way. And how this may affect development and testing. Along with links to how other programming languages have reacted to this kind of threat (/r/perl, /r/perl6 and FaceBook comments).

    Your help is needed!

    JJ Merelo describes the similarities between graffiti and the art of writing Perl 6 documentation. How you can be proud about your own contribution, but also about the ever-evolving result.

    What do you think about Perl 6?

    An interesting discussion on Reddit’s r/ProgrammingLanguages on the question:

    So Perl 6 is a successor of Perl 5. It is a new Perl that changes language syntax and adds new features like Grammars. What do you think about Perl 6?

    With some nice descriptions of the unique features of Perl 6 and how some people see its future.

    CaR TPF Grant Report

    Zoffix Znet has presented his (first) grant report for the CaR Grant. The progress has been mostly in getting a still better comprehension of the problem at hand, and how proposed solutions may or may not achieve the desired goals (Reddit and FaceBook comments).

    Getting started with Sparrowdo

    Patrick Spek has written a nice tutorial about the use of Sparrowdo titled "Getting started with Sparrowdo". A must read if you’re looking into automating menial jobs.

    Welcome Tom Browder!

    It was not too long ago when Tom Browder submitted his first Pull Request. Since then, quite a few more Pull Requests were submitted by him. Last week he received a Rakudo commit bit so that he can now make changes to Rakudo without being Warnocked. Yours truly is looking forward to more contributions by him!

    Core Developments

    Other blog posts

    Meanwhile on Twitter

    Meanwhile on StackOverflow

    Meanwhile on FaceBook

    Meanwhile on perl6-users

    Perl 6 in comments

    Perl 6 Modules

    New modules:

    Updated modules:

    Winding Down

    Quite a diverse week again. Lots of blog posts. Not so many core developments just before the Rakudo Compiler Release 2018.05. But outside of that, wow! Can’t wait to see next week’s batch of goodies. So, until then!

    samcv: Secure Hashing for MoarVM to Prevent DOS Attacks

    Published on 2018-05-16T07:00:00

    Hashes are very useful data structures and underlie many internal representations in Perl 6, as well as being used as themselves. These data structures are very nice since they offer O(1) insertion time and O(1) lookup time on average. Hashes have long been considered an essential feature for Perl, much loved by users. Though when exploited, hashes can cause servers to grind to a halt. New in Rakudo Perl 6 2018.05 is a feature called hash randomization, which does much to help protect against this attack. In this article I explain some hashing basics as well as how the attack against non-randomized hashing can work.

    Table of Contents

    Hashing Basics

    Some hashing basics: when we use a hash, we take a string and come up with an integer to represent the string. Similar to how md5 or sha1 sums take an arbitrary amount of data and condense it into a shorter number which can identify it, we do a similar thing for strings.

    my %hash; %hash<foo> = 10

    In this code, MoarVM takes the string foo and performs a hashing function on it using a series of bitwise operations. The goal is to create a shorter number which allows us to put the foo key into one of the 8 buckets that MoarVM initializes when a hash is created.

    8 Hash buckets
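To make the bucket-selection idea concrete, here is a small sketch in Python. It is purely illustrative: the `toy_hash` function below is a djb2-style mixer, not MoarVM's actual (randomly seeded) hash function.

```python
# Illustration only: a toy string hash plus bucket selection. A hash
# function condenses a string into an integer, and the integer modulo
# the bucket count picks one of the (initially 8) buckets.

def toy_hash(s: str) -> int:
    h = 5381
    for ch in s:
        # djb2-style mixing: multiply, then xor in the next character
        h = ((h * 33) ^ ord(ch)) & 0xFFFFFFFF
    return h

NUM_BUCKETS = 8  # MoarVM starts a fresh hash with 8 buckets

def bucket_for(key: str) -> int:
    return toy_hash(key) % NUM_BUCKETS

print(bucket_for("foo"))  # some bucket index in 0..7
```

The same key always lands in the same bucket for a given seed, which is exactly what makes lookup cheap in the common case.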

    Our hashing code sets up a predefined number of buckets. When any bucket fills up to 10 items, the number of buckets is doubled. In normal operation the hashes will be randomly distributed, so it would take ≈47 keys added before we have to expand the buckets the first time (≈47 is the average number of insertions needed for one bucket to reach 10 items).

    When the buckets are expanded, we will now have 16 buckets. In normal operation our previous ≈47 items should be evenly distributed into those 16 buckets.
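The growth rule can be simulated with a few lines of Python (illustrative only; the ≈47 figure above is an average, so a single deterministic run may land above or below it):

```python
# Simulate the "double when one bucket holds 10 items" rule: insert
# distinct keys into 8 buckets using a well-mixed hash (crc32 here, as a
# stand-in) and count insertions until any bucket reaches 10 items.
import zlib

NUM_BUCKETS = 8
buckets = [0] * NUM_BUCKETS
inserted = 0
while max(buckets) < 10:
    key = f"key-{inserted}"
    buckets[zlib.crc32(key.encode()) % NUM_BUCKETS] += 1
    inserted += 1

print(inserted)  # typically in the neighborhood of the ~47 mentioned above
```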

    The Attack

    Without a random hash seed it is easy for an attacker to generate strings which will all result in the same hash. Lookup then devolves to O(n²) time; this O(n²) is actually O(string_length * num_collisions). When we have hash collisions, no matter how many times we double the number of buckets, the strings which have hash collisions will always remain in the same bucket as each other. To locate the correct string, MoarVM must go down the chain and compare each stored hash value with the one we’re looking for. Since they are all the same, we must fall back to also checking each string itself manually until we find the correct string in that bucket.

    Hash collision
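The quadratic cost is easy to see with a short Python sketch: once all keys share one bucket, every lookup is a linear scan of that bucket's chain, so looking up each of n colliding keys costs 1 + 2 + … + n comparisons.

```python
# Sketch of the degradation: n keys that share one hash value form a
# single chain. Finding the i-th key means scanning past the entries
# before it, so looking up all n keys costs a quadratic total.

def lookups_cost(n: int) -> int:
    # total comparisons to look up every one of n keys stored in one chain
    return sum(i + 1 for i in range(n))

print(lookups_cost(10))   # 55
print(lookups_cost(100))  # 5050 -- quadratic growth
```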

    This attack is done by creating a function that is essentially our hashing function run backward (for those curious, see here for an example of code which does forward and backward hashing for the Chrome V8 engine’s former hashing function). We hash our target string, t. We then take random 3-character sequences (in our case graphemes) and plug them into our backward hashing function along with the hash of our target t. The backward hash and the random character sequence are stored in a dictionary, and the process is repeated until we have a very large number of backward hashes and random 3-grapheme prefixes.

    We can then use this dictionary to construct successively longer (or shorter, if we so desire) strings which have the same hash as our target string t. This is a simplification of how the Meet-In-The-Middle attack works.
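A deliberately weak toy hash (Python, not MoarVM's actual function) shows what "running the hash function backward" means: with a multiply-and-add hash, every step is invertible modulo 2³², so an attacker can start from a target hash and compute which prefix state reaches it. This invertibility is the core trick exploited by meet-in-the-middle attacks.

```python
# Toy demonstration of a reversible hash step. Forward: h' = (h*33 + b)
# mod 2^32. Backward: given the result and the byte b, recover h via the
# modular inverse of 33 (33 is odd, hence invertible mod 2^32).

MOD = 2 ** 32
INV33 = pow(33, -1, MOD)  # modular inverse of 33

def forward(s: bytes, h: int = 5381) -> int:
    for b in s:
        h = (h * 33 + b) % MOD
    return h

def backward_step(target: int, b: int) -> int:
    # the hash state that, after absorbing byte b, produces `target`
    return ((target - b) * INV33) % MOD

t = forward(b"secret")
# undo the final byte 't': we recover exactly the hash of b"secre"
assert backward_step(t, ord("t")) == forward(b"secre")
print("backward step recovers the prefix hash")
```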

    This has been fixed in most programming languages (Python, Ruby, Perl), and several CVEs have been issued over the years for this exploit (see the CVEs for PHP, OCaml, Perl, Ruby and Python).

    Assuming everything is fine, in the next release I will also merge changes which introduce a stronger hashing function called SipHash. SipHash is meant to protect against an attacker discovering the hash secret remotely. While randomizing the seed makes this attack much harder, a determined attacker could discover the seed, and once that is done they can easily perform a meet-in-the-middle attack. SipHash was designed to solve the vulnerability of the hash function itself to meet-in-the-middle attacks. The randomization of the hash secret together with a non-vulnerable hashing function work to avert hash-collision denial of service attacks.

    While the hash secret randomization will be out in Rakudo 2018.05, SipHash is planned to be introduced in Rakudo 2018.06.

    Randomness Source

    On Linux and Unix we prefer function calls rather than reading from /dev/urandom. There are some very important reasons for this.

    Relying on an external file existing is potentially problematic. If we are in a chroot and /dev is not mounted we will not have access to /dev/urandom. /dev/urandom is not special, it can be deleted by accident (or on purpose) or a sparse data file mounted in its place undetectable by programs. Trusting it simply because of its path is not ideal. Also, if we exhaust the maximum number of open file descriptors we will be unable to open /dev/urandom as well.

    System Functions

    On Windows we use pCryptGenRandom, which has been provided by advapi32.dll since Windows XP.

    Linux, FreeBSD, OpenBSD and MacOS all use system-provided random calls (if available) to get the data rather than having to open /dev/urandom. All these OSes guarantee these calls to be non-blocking, though MacOS’s documentation does not comment on it. This matters mostly in very early userspace; it bit Python when a developer accidentally changed the randomness source, causing systems that relied on very early-boot Python scripts to stop booting while waiting for the randomness source to initialize.

    If the function doesn’t exist we fall back to using /dev/urandom. If opening or reading it fails, on BSD’s we will use the arc4random() function. In many BSD’s this is seeded from the system’s random entropy pool, providing us with a back up in case /dev/urandom doesn’t exist.

    On Linux we use the getrandom() system call, which was added in kernel 3.17, calling it directly instead of going through the glibc wrapper, since the wrapper was added to glibc much later than the call was added to the kernel.

    On MacOS, Solaris and FreeBSD we use getrandom() while on OpenBSD we use getentropy().
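The preference order described above (system call first, /dev/urandom as the fallback) can be sketched in Python; note that `os.getrandom` is only available on Linux with kernel 3.17+, which is exactly why a fallback path is needed.

```python
# Sketch of the "prefer a system call over opening /dev/urandom"
# strategy described above. os.getrandom wraps the Linux getrandom()
# system call; on other platforms (or very old kernels) we fall back
# to reading the device file.
import os

def random_bytes(n: int) -> bytes:
    try:
        return os.getrandom(n)         # direct system call, no file descriptor
    except (AttributeError, OSError):  # not on Linux, or kernel too old
        with open("/dev/urandom", "rb") as f:
            return f.read(n)

seed = random_bytes(8)
print(len(seed))  # 8
```

Python's own `os.urandom` applies a similar strategy internally, for the same reasons given above (chroots, deleted device files, exhausted file descriptors).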

    User Facing Changes

    From Rakudo Perl 6 2018.05 on, the order in which keys are returned will vary from one perl6 instance to the next.

    perl6 -e 'my %hash = <a 1 b 1 c 1 d 1 e 1 f 1>; say %hash.keys'
    (d f c a b e)
    perl6 -e 'my %hash = <a 1 b 1 c 1 d 1 e 1 f 1>; say %hash.keys'
    (e f a d c b)

    This will also affect iterating over a hash without sorting: for %hash { }
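Other languages that randomize their hashes behave the same way. As an illustrative parallel (Python here, not Perl 6), set iteration order is reproducible within one interpreter run but changes with the hash seed:

```python
# Show that iteration order over a string set depends on the per-process
# hash seed: run a child interpreter with an explicit PYTHONHASHSEED and
# capture the order it prints.
import os
import subprocess
import sys

def key_order(seed: str) -> str:
    env = dict(os.environ, PYTHONHASHSEED=seed)
    code = "print(list({'a', 'b', 'c', 'd', 'e', 'f'}))"
    return subprocess.run([sys.executable, "-c", code],
                          env=env, capture_output=True, text=True).stdout

print(key_order("1"))
print(key_order("2"))  # usually a different order than seed 1
```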

    What Do I Have To Do?

    Users and module developers should make sure that they explicitly sort hashes and not rely on a specific order being constant. If you have a module, take a look at the code and see where you are iterating on a hash’s keys and whether or not the order of processing the hash’s keys affects the output of the program.

    # This should be okay since we are putting the hash into another hash; order
    # does not matter.
    my $i = 0;
    for %hash.keys -> $key {
        %stuff{$key} = $i++;
    }
    # This can potentially cause issues, depending on where `@stuff` is used.
    for %hash.keys -> $key {
        @stuff.push: $key;
    }
    # This should be OK since we are using is-deeply and comparing a hash with
    # another hash.
    is-deeply my-cool-hash-returning-function($input), %( foo => 'text', bar => 'text', baz => 'text');
    # Probably best to avoid using `is`. The `is` test function converts the input
    # to a string before checking for equality, but this works since hashes
    # stringify in sorted order.
    is %hash,  %( foo => 'text', bar => 'text', baz => 'text');
    # NO. Keys are not guaranteed to be in the same order on each invocation.
    is %hash.keys, <a b c d>;

    Module Developers

    Module developers should check out the git master of Rakudo, or if 2018.05 has been released, use that to run the tests of your module. Make sure to run the tests multiple times, ideally at least 10 times or use a loop:

    while prove -e 'perl6 -Ilib'; do true; done

    This loop will run again and again until it encounters a test failure, in which case it will stop.

    You must run your tests many times because the hash order will be different on each run. For hashes with a small number of items, it may not fail on every run. Make sure that you also look at the source to identify items that need fixing; don’t just rely on the tests to tell you whether you must make changes to your module.

    Further Reading

    Hardening Perl’s Hash Function, an article about the changes Perl 5 has made to harden its hashing.

    Weekly changes in and around Perl 6: 2018.20 Committed Through Time

    Published by liztormato on 2018-05-14T22:22:44

    JJ Merelo has published a Technical Report titled “Perl 6 documentation repository through time: contributions through commits” (PDF) in which he describes how contributions have been distributed throughout the repository history, and what kind of changes have been brought by the Perl Foundation grant and other events. One of the products of the Curating and improving Perl 6 documentation TPF grant (First Grant Report).

    Core Developments

    Blog Posts

    Meanwhile on Twitter

    Meanwhile on StackOverflow

    Meanwhile on FaceBook

    Meanwhile on perl6-users

    Perl 6 in comments

    Perl 6 Modules

    New Modules:

    Updated modules:

    Winding Down

    Check in again next week for more Perl 6 news!

    gfldex: Deconstructing Simple Grammars

    Published by gfldex on 2018-05-10T14:51:34

    Last year I wrote an egg timer that was parsing command line arguments similar to GNU sleep. I was happy with the stringent form of the parser as follows.

    my Seconds $to-wait = @timicles»\
        .split(/<number>/, :v)\
        .map(-> [$,Rat(Any) $count, Str(Any) $unit] --> Seconds { %unit-multipliers{$unit} * $count })\

    It does a few simple things and does them one after another. A grammar with an action class would be overkill. I wasn’t happy with using split’s ability to return the needle with the parts. It certainly does not improve readability.

    After quite a few iterations (and stepping on a bug), I came up with a way to use Str.match instead. If I convert each Match-object into a Hash I can use deconstruction in a signature of a pointy block.

    my Seconds $to-wait = @timicles»\
        .match(/<number> <suffix>+/)».hash\ # the +-quantifier is a workaround
        .map(-> % ( Rat(Any) :$number, Str(Any) :$suffix ) { %unit-multipliers{$suffix} * $number })\

    Instead of using positionals I can use named arguments that correspond to the named regexes inside the match arguments.

    Even in such a small piece of code things fall into place. Hyper method calls get rid of simple loops. The well-crafted built-in types allow signature deconstruction to actually work without loads of temporary variables. It’s almost as if certain language designers were aiming to make a most elegant language.

    Rakudo Star Release 2018.04

    Published on 2018-05-07T00:00:00

    Death by Perl6: Oslo PTS 2018; Chains, tools, and reindeer

    Published by Tony O'Dell on 2018-05-02T16:05:56

    This year at PTS we picked up where we left off last year, talking about the build system, dependency chains, and walking to the next food spot. Last year we had developed perl6 Warthog. Warthog is a hash "collapser" that works on OS [distro|kernel] and environment variables. This allows you to do things like determine native dependencies (like libcrypt in linux vs libgcrypt on osx) in your META6.json. An example of that here. So, that was last year.

    This year I worked mainly on integrating that into zef. Nine worked on Distribution::Builder::MakeFromJSON and integrated support for that into zef. Ugexe also worked on that integration and improved the S22 support for depends, allowing :from<native|bin> to signal native lib or bin requirements in the meta (and corralling/giving direction to nine and me).

    The other thing I worked on is ancillary to zef and something we hope to integrate in the future to increase the speed of determining and building dependencies. I'll call it zefpan for this post though it's not meant to be comparable to cpan. Zefpan is essentially a webservice that provides a dependency chain for YourModule:ver<>:api<>:auth<>, this allows zef to bypass parsing the entirety of the ecosystem JSON to determine what's needed for installing/building a module and just make an http query to get the dependency chain. It also allows us to do things like generate a "cool" dependency graph. Check this graph out for Cro:


    I'm keeping this short because I'm relatively sick but there will be more in the next couple of weeks discussing Warthog, the dependency graphing, and possibly zefpan as we hash things out.

    Death by Perl6: Perl Toolchain Summit 2018 - META6 dependency hello

    Published by Nick Logan on 2018-04-30T05:35:19

    I recently returned from the 2018 Perl Toolchain Summit that was held in Oslo, Norway. This event is important because it gathers many key people and tons of experience into a face-to-face forum where technical ideas can be delivered and received with an appropriate amount of empathy. Being in the same room as so many peers is motivating. As one point of interest: I've been friends with tony-o since grade school, but he lives on the other side of the US -- PTS is still when we get the most work done despite easily having the means to do virtual standups.

    I worked on Perl6 stuff ( although I enjoy coming just from working in Perl my entire professional career ), mostly collaborating with nine and tony-o. They actually did most of the initial work and testing, so I'm glad they were able to attend. And while they were busy adding features, I was busy regressing CPAN::Uploader::Tiny -- luckily the author skaji was one of the attendees so the issue was quickly resolved.

    I can assure you we ate and drank well. The venue was great, the weather was warmer than I was used to, and of course lots of friendly people. But without further ado, ( some ) of the Perl6 results of PTS 2018:

    Allow structured dependency format

    Previously depends only took a string-only value akin to "Foo:ver<1>:auth<a@b.c>", but it was improved to also take a hash form: { "name" : "Foo", "ver" : "1", "auth" : "a@b.c" }. This is intended to allow more metadata in situations where a nested data structure is needed to express something.

    Add declarative syntax for depends leaf nodes

    The previous task ultimately led into this one, which adds some basic declarative conditional logic based on the syntax described in -- Ideally this allows more complex dependency trees that can still be graphed before downloading or building any modules.

    "depends": [
            "from": "native",
            "name": {
                "": {
                    "darwin": "libssl",
                    "linux" : "libssl",
                    "win32" : "ssleay32"

    Add basic :from<bin> and :from<native> support

    Support based on :from<bin> and :from<native>

    	"description" : "grep PATH for a matching 'curl' script",
    	"depends" : [ "curl:from<bin>"]
    	"description" : "load library by name $*VM.platform-library-name("curl".IO)",
    	"depends" : [ "curl:from<native>"]

    The implementation behind these is still quite naive, but for now hopefully serves useful for:

    Maybe one day we will figure out a way to resolve versions for native/bin dependencies, possibly via backends/plugins that access e.g. apt-get ( although this related more to the previously mentioned dependency hints ).

    Add support for builder key in META6.json

    If you wanted to do some pre-install logic ( such as running make ) you would put a file called in the root of your distribution, and do class Build { method build($cwd) { ... } }. The interface is not bad, but there are a couple of warts:

    1. .pm file extension instead of .pm6
    2. location in project root directory
    3. being a module that's importable under -I. ( not listed in META6.json ) or -Ilib.

    Now zef supports a "builder" field to delegate the previous logic to a custom namespace. See:

    Sponsors for the Perl Toolchain Summit 2018

    Thanks to the sponsors for enabling this important open source work!

    NUUG Foundation, Teknologihuset, cPanel, FastMail, Elastic, ZipRecruiter, MaxMind, MongoDB, SureVoIP, Campus Explorer, Bytemark, Infinity Interactive, OpusVL, Eligo, Perl Services, Oetiker+Partner.

    Also thanks to anyone who encouraged people to bring Stroopwafels back home.

    Zoffix Znet: WANTED: Perl 6 Historical Items

    Published on 2018-04-16T00:00:00

    A call to collect memorabilia for a Perl 6 museum

    Perl 6 Inside Out: 🔬 75. my $x = $x in Perl 6

    Published by andrewshitov on 2018-04-10T08:51:58

    What happens if you try to create a new variable and immediately initialise it with itself, as shown in the following test code:

    my $x = $x;

    This does not work (which is expected), but Perl 6 is so kind to the user that it gives an error message prepared especially for this case:

    ===SORRY!=== Error while compiling:
    Cannot use variable $x in declaration to initialize itself
    ------> my $x = $⏏x;
      expecting any of:

    Let us find the place in the code where the error message is triggered. This case is captured in the Grammar of Perl 6, at the place where a variable is parsed:

    token variable {
        . . .
        | <sigil>
          [ $<twigil>=['.^'] <desigilname=desigilmetaname>
            | <twigil>? <desigilname> ]
          [ <?{ !$*IN_DECL && $*VARIABLE && $*VARIABLE eq
            $<sigil> ~ $<twigil> ~ $<desigilname> }>
              {
                  self.typed_panic: 'X::Syntax::Variable::Initializer',
                      name => $*VARIABLE
              }
          ]?
        . . .
    }

    The condition to throw an exception is a bit wordy, but you can clearly see here that the whole variable name is checked, including both sigil and potential twigil.

    The exception itself is located in src/core/Exception.pm6 (notice that file extensions were changed from .pm to .pm6 recently), and it is used only for the above case:

    my class X::Syntax::Variable::Initializer does X::Syntax {
        has $.name = '<anon>';
        method message() {
            "Cannot use variable $!name in declaration to initialize itself"
        }
    }

    And that’s all for today. Rakudo Perl 6 sources can be really transparent sometimes! 🙂

    Perl 6 Inside Out: 🦋 74. Typed hashes in Perl 6

    Published by andrewshitov on 2018-04-08T09:35:41

    In Perl 6, you can restrict the content of a variable container by specifying its type, for example:

    my Int $i;

    There is only one value in a scalar variable. You can extend the concept to arrays and let their elements keep only integers, as is done in the next example:

    > my Int @i;
    > @i.push(42);
    > @i.push('Hello');
    Type check failed in assignment to @i;
    expected Int but got Str ("Hello")
      in block <unit> at <unknown file> line 1

    Hashes keep pairs, so you can specify the types of both keys and values. The syntax is not deducible from the above examples.

    First, let us announce the type of the value:

    my Str %s;

    Now, it is possible to have strings as values:

    > %s<Hello> = 'World'
    > %s<42> = 'Fourty-two'

    But it’s not possible to save integers:

    > %s<x> = 100
    Type check failed in assignment to %s;
    expected Str but got Int (100)
      in block <unit> at <unknown file> line 1

    (By the way, notice that in the case of %s<42> the key is a string.)

    To specify the type of the second dimension, namely, of the hash keys, give the type in curly braces:

    my %r{Rat};

    This variable is also referred to as object hash.

    Having this, Perl expects you to have Rat keys for this variable:

    > %r<22/7> = pi
    > %r
    {22/7 => 3.14159265358979}

    Attempts to use integers or strings, for example, fail:

    > %r<Hello> = 1
    Type check failed in binding to parameter 'key';
    expected Rat but got Str ("Hello")
      in block <unit> at <unknown file> line 1
    > %r{23} = 32
    Type check failed in binding to parameter 'key';
    expected Rat but got Int (23)
      in block <unit> at <unknown file> line 1

    Finally, you can specify the types of both keys and values:

    my Str %m{Int};

    This variable can be used for translating month number to month names but not vice versa:

    > %m{3} = 'March'
    > %m<March> = 3
    Type check failed in binding to parameter 'key';
    expected Int but got Str ("March")
      in block <unit> at <unknown file> line 1


    Perl 6 Inside Out: 🔬73. Keys, values, etc. of hashes in Perl 6

    Published by andrewshitov on 2018-04-07T09:46:26

    Today, we will take a look at a few methods of the Hash class that return all hash keys or values or both:

    > my %h = H => 'Hydrogen', He => 'Helium', Li => 'Lithium';
    {H => Hydrogen, He => Helium, Li => Lithium}
    > %h.keys;
    (H Li He)
    > %h.values;
    (Hydrogen Lithium Helium)
    > %h.kv;
    (H Hydrogen Li Lithium He Helium)

    While you may want to go directly to the src/core/Hash.pm6 file to see the definitions of the methods, you will not find them there. The Hash class is a child of Map, and all these methods are defined in src/core/Map.pm6. Getting keys and values is simple:

    multi method keys(Map:D:) {
    multi method values(Map:D:) {

    For the kv method, more work has to be done:

    multi method kv(Map:D:) {
        Seq.new(class :: does Rakudo::Iterator::Mappy {
            has int $!on-value;
            method pull-one() is raw {
                . . .
            }
            method skip-one() {
                . . .
            }
            method push-all($target --> IterationEnd) {
                . . .
            }
        }.new(self))
    }

    As you see, the method returns a sequence that is built using an anonymous class implementing the Rakudo::Iterator::Mappy role. We already saw how this approach is used in combination with defining pull-one and push-all methods.

    Let us look at another set of methods, pairs and antipairs. One of them is simple and straightforward:

    multi method pairs(Map:D:) {

    Another one is using an intermediate class:

    multi method antipairs(Map:D:) {
        Seq.new(class :: does Rakudo::Iterator::Mappy {
            method pull-one() {
                . . .
            }
            method push-all($target --> IterationEnd) {
                . . .
            }
        }.new(self))
    }

    Both methods produce results of the same structure:

    > %h.antipairs
    (Hydrogen => H Lithium => Li Helium => He)
    > %h.pairs
    (H => Hydrogen Li => Lithium He => Helium)


    Perl 6 Inside Out: 🔬72. Superscripts in Perl 6

    Published by andrewshitov on 2018-04-05T08:33:58

    In Perl 6, you can use superscript indices to calculate powers of numbers, for example:

    > 2⁵
    32
    > 7³
    343

    It also works with more than one digit in the superscript:

    > 10¹²
    1000000000000

    You can guess that the above cases are equivalent to the following:

    > 2**5
    32
    > 7**3
    343
    > 10**12
    1000000000000

    But the question is: How on Earth does it work? Let us find it out.

    For the Numeric role, the following operation is defined:

    proto sub postfix:<ⁿ>(Mu $, Mu $) is pure {*}
    multi sub postfix:<ⁿ>(\a, \b) { a ** b }

    Aha, that is what we need, and the superscript notation is converted to the simple ** operator here.

    You can visualise what exactly is passed to the operation by printing the operands:

    multi sub postfix:<ⁿ>(\a, \b) {
        nqp::say('# a = ' ~ a);
        nqp::say('# b = ' ~ b);
        a ** b
    }

    In this case, you’ll see the following output for the test examples above:

    > 2⁵
    # a = 2
    # b = 5
    > 10¹²
    # a = 10
    # b = 12

    Now, it is time to understand how the postfix that extracts superscripts works. Its name, written in superscript, should not mislead you. This is not a magic trick of the parser, this is just the name of the symbol, and it can be found in the Grammar:

    token postfix:sym<ⁿ> {
        <sign=[⁻⁺¯]>? <dig=[⁰¹²³⁴⁵⁶⁷⁸⁹]>+ <O(|%autoincrement)>
    }

    You see, this symbol is a sequence of superscripted digits with an optional sign before them. (Did you think of a sign before we reached this moment in the Grammar?)

    Let us try negative powers, by the way:

    > say 4⁻³
    # a = 4
    # b = -3

    Also notice that the whole construct is treated as a postfix operator. It can also be applied to variables, for example:

    > my $x = 9
    > say $x²
    # a = 9
    # b = 2

    So, a digit in superscript is not a part of the variable’s name.

    OK, the final part of the trilogy, the code in Actions, which parses the index:

    method postfix:sym<ⁿ>($/) {
        my $Int := $*W.find_symbol(['Int']);
        my $power := nqp::box_i(0, $Int);
        for $<dig> {
            $power := nqp::add_I(
               nqp::mul_I($power, nqp::box_i(10, $Int), $Int),
               nqp::box_i(nqp::index("⁰¹²³⁴⁵⁶⁷⁸⁹", $_), $Int),
               $Int);
        }
        $power := nqp::neg_I($power, $Int)
            if $<sign> eq '⁻' || $<sign> eq '¯';
        make QAST::Op.new( :op<call>, :name('&postfix:<ⁿ>'),
                          $*W.add_numeric_constant($/, 'Int', $power));
    }

    As you can see here, it scans the digits and updates the $power variable, multiplying the accumulated value by 10 and adding the value of the next digit at each step.
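The accumulation loop can be transcribed to Python for illustration: each superscript digit's value is its offset in the string of superscript digits, exactly as in the Actions code above.

```python
# Illustrative transcription of the superscript-power parsing: build the
# integer digit by digit (power = power*10 + digit), where a digit's
# value is its index in the superscript-digit string, then apply the sign.

SUPERSCRIPTS = "⁰¹²³⁴⁵⁶⁷⁸⁹"

def superscript_power(digits: str, sign: str = "") -> int:
    power = 0
    for d in digits:
        power = power * 10 + SUPERSCRIPTS.index(d)
    return -power if sign in ("⁻", "¯") else power

print(superscript_power("¹²"))      # 12
print(superscript_power("³", "⁻"))  # -3
```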

    The available characters are listed in a string, and to get a digit’s value, its offset in that string is used. The $<dig> match contains the digits, as you can see in the Grammar token shown above.



    Perl 6 Inside Out: 🔬71. Implementing Int.sleep() in Perl 6

    Published by andrewshitov on 2018-04-04T08:32:37

    Hello! Yesterday, I was giving my Perl 6 Intro course at the German Perl Workshop in Gummersbach. It was a great pleasure to prepare and run this one-day course, and, while it was difficult to cover everything, we touched all main aspects of the Perl 6 language: from variables to regexes and parallel computing. Of course, it was only a top-level overview, and there was not enough time to make all the exercises. You can do them at home, here’s the Perl 6 Intro – Exercises PDF file.

    Among other things, we tried to implement the sleep method for integers. The rationale behind that is that it is possible to say:

    > 10.rand

    But not:

    > 10.sleep
    No such method 'sleep' for invocant of type 'Int'
      in block <unit> at <unknown file> line 1

    OK, so let’s first implement the simplest form of sleep for Ints only. Go to src/core/Int.pm6 and add the following:

    my class Int does Real {
        method sleep() {
            say "# sleeping {self} seconds";
            nqp::sleep(nqp::tonum_I($!value));
        }
    }


    There is no declaration of the $!value attribute in this file, but we know that it can be found somewhere in Perl6/Metamodel/BOOTSTRAP.nqp:

    # class Int is Cool {
    # has bigint $!value is box_target;
    Int.HOW.add_parent(Int, Cool);
    Int.HOW.add_attribute(Int, BOOTSTRAPATTR.new(:name<$!value>, :type(bigint),
                          :box_target(1), :package(Int)));
    Int.HOW.set_boolification_mode(Int, 6);

    Compile and run. The desired code works now:

    > 3.sleep
    # sleeping 3 seconds

    What can be changed here? The first idea is to allow non-integer numbers as the delay duration. As Int does the Real role, just move the method to src/core/ and get the value using the Num method instead of reading $!value directly (there is no such attribute in the Real role):

    my role Real does Numeric {
        method sleep() {
            say "# sleeping {self} seconds";
            nqp::sleep(self.Num);
        }
    }

    Now it also works with rationals and floating-point numbers:

    > 2.sleep
    > 3.14.sleep
    > pi.sleep

    Before wrapping it up, let us take a look at the body of the sleep subroutine. It is defined in src/core/Date.pm6:

    proto sub sleep(|) {*}
    multi sub sleep(--> Nil) { sleep(*) }
    multi sub sleep($seconds --> Nil) {
        # 1e9 seconds is a large enough value that still makes VMs sleep
        # larger values cause nqp::sleep() to exit immediately (esp. on 32-bit)
        if nqp::istype($seconds,Whatever) || $seconds == Inf {
            nqp::sleep(1e9) while True;
        }
        elsif $seconds > 1e9 {
            nqp::sleep($_) for gather {
                1e9.take xx ($seconds / 1e9);
                take $seconds - 1e9 * ($seconds / 1e9).Int;
            }
        }
        elsif $seconds > 0e0 {
            nqp::sleep($seconds.Num);
        }
    }

    The code is very clear and does not need any comments.
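    The chunking of very long sleeps into 1e9-second pieces plus a remainder can be sketched in Python, purely as an illustration of what the gather/take block computes (the function name is invented):

```python
def sleep_chunks(seconds, chunk=1e9):
    """Return the list of individual sleep durations that the
    gather/take block in sleep() would produce for a huge value."""
    full = int(seconds / chunk)      # how many whole 1e9-second naps
    rest = seconds - chunk * full    # the remainder, slept last
    return [chunk] * full + ([rest] if rest > 0 else [])

print(sleep_chunks(2.5e9))  # two full 1e9 chunks and a 5e8 remainder
```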

    And maybe just to see why our modified Rakudo printed the time after sleep in the tests above, let’s refer to the documentation of NQP to see that its sleep function’s return value is the number of seconds:

    ## sleep
    * `sleep(num $seconds --> num)`
    Sleep for the given number of seconds (no guarantee is made
    how exact the time sleeping is spent.)
    Returns the passed in number.


    my Timotimo \this: Tangentially related work

    Published by Timo Paulssen on 2018-03-26T17:30:30

    Hi there, it's already been three weeks since my first blog post on my TPF grant work. In between then and now the nice folks over at Edument made public the work I've been doing on the side for a couple of months. Fortunately, my grant work benefits a whole lot from this as well. Being able to debug (set breakpoints, inspect variable contents, single-step execution) the profile dumping code in nqp (re-used in Rakudo), as well as the heap snapshot loading code in App::MoarVM::HeapAnalyzer::Model lets me more easily figure out why things might be wrong or even crashing.

    Since Edument's product is Cro, the reactive framework for implementing and consuming microservices, I use simple Cro applications as test subjects for the profilers, as well as the debugger.


    Yesterday I took the first close look at the Cro app that powers the new profiler frontends, by running it under the heap profiler. I was rather surprised to find that even while no requests were being made, the heap snapshot file kept growing at a hundred megabytes every few seconds. Something was clearly amiss.

    To understand why this happens you must know that MoarVM will take a heap snapshot after every GC run. That means something must be causing the GC to run frequently even if no actual work is being done.

    Fortunately, I know that Rakudo's ThreadPoolScheduler has a built-in supervisor that has an eye on the performance of the thread pool. It runs on its own thread and wakes up a hundred times every second.

    My recent profiler work to make multi-threaded applications run properly under the regular profiler let me have a closer look at what was being allocated. Turns out a lot of objects related to iterating over a Range object were being created. A single function was using range iteration, but that accounted for a huge chunk of allocations. Looking at what functions allocate the different types of objects, you can see that pretty much 100% of all Scalar allocations were caused by iterating over a Range.

    The Instrumented Profiler shows a table of which routines allocate how many Objects of type Scalar
    Scalars may only hold 3 pointers in them on top of the common object header that's 16 bytes big, but it surely adds up! (measurement taken over a 60 second period)

    So just changing the for ^worker-list.elems into an equivalent loop using the loop keyword got allocations down significantly.

    There was, of course, more left to do. The math we were doing to figure out how active the process has been since we last woke up (including a moving average) caused some boxing and unboxing. A call to a helper method was allocating empty hashes for named arguments every time (something we can often optimize away completely, but in this case we couldn't be sure that it'd be safe). And finally, the getrusage op was creating a fresh int array every time.

    I was initially reluctant to make the code less readable in the interest of performance. However, a supervisor that allocates absolutely nothing was almost within reach. Lizmat inlined the getrusage-total helper sub that caused boxing and unboxing on every call, and I decided to inline the prod-affinity-workers helper method as well – this one was only used in a single place, anyway.

    The last piece of the puzzle was the getrusage op that allocated integer arrays every time it was called. To get the last drop of performance out of the supervisor thread, I changed the implementation of nqp::getrusage to take an already allocated array.

    After all this work, the ThreadPoolScheduler will not allocate anything at all if nothing is happening.

    I hope I have given you a little insight into what working on performance can look like.

    Now that I've shaved this yak, I can properly profile and analyze Cro applications and anything that runs tasks with the start keyword, like the backend of the heap analyzer!

    I hope to see you back on my blog when the next blog post hits the 'net!
      - Timo

    6guts: Remote Debug Support for MoarVM

    Published by jnthnwrthngtn on 2018-03-13T22:29:43

    Some years back, I worked on bringing basic debug support to Rakudo Perl 6. It works by taking the program AST and instrumenting it. This approach produced a somewhat useful result, but also carries a number of limitations.

    Enter the Edument Team

    At Edument, the company where I work, we’re keen to support Perl 6 and drive it forward. Last summer, we released Cro. In the autumn, we started work on our next Perl 6 product – which we’ll be announcing in just a couple more weeks. Along the way, we realized that it really needed Perl 6 to have a better debugging solution. So what did we do?

    We decided to pitch in and fund its development, of course! During the winter, we’ve worked on adding remote debug support to MoarVM, and today we’ve opened a pull request with it.

    Some details

    With our additions, MoarVM can be started with a flag indicating it should run a debug server, along with a port that it should listen on. It can then either wait for a connection before doing anything, or run the program as normal but allow for a connection to be made later.

    The debug protocol is defined in terms of MessagePack, which you can think of as being like a binary form of JSON. The PR includes documentation of the protocol, and we hope that having based it on an existing serialization format will make implementation of that easy for those who should wish to do so.
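    To give a feel for "a binary form of JSON", here is a tiny hand-rolled MessagePack encoder in Python covering a few simple cases. This only illustrates the serialization format itself – the actual debug protocol messages are defined in the PR's documentation, not here:

```python
def pack_small(obj):
    """Minimal MessagePack encoder for small ints, short strings,
    and small maps (per the MessagePack spec's fix* formats)."""
    if isinstance(obj, int) and 0 <= obj < 128:
        return bytes([obj])                  # positive fixint
    if isinstance(obj, str) and len(obj) < 32:
        b = obj.encode("utf-8")
        return bytes([0xA0 | len(b)]) + b    # fixstr
    if isinstance(obj, dict) and len(obj) < 16:
        out = bytes([0x80 | len(obj)])       # fixmap
        for k, v in obj.items():
            out += pack_small(k) + pack_small(v)
        return out
    raise TypeError("unsupported in this sketch")

print(pack_small({"id": 42}).hex())  # 81a269642a
```

    The five bytes break down as: 0x81 (map of one pair), 0xa2 0x69 0x64 (two-byte string "id"), 0x2a (the integer 42) – noticeably more compact than the JSON `{"id":42}`.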

    The PR documents the features that are implemented and available through the protocol.

    We’ve more plans for the future, but this is already a quite decent feature set that opens up a lot of possibilities.

    A Perl 6 library and basic debug CLI

    Along with the debug server implementation in MoarVM, we’re also releasing a Perl 6 module that speaks the debug protocol, along with a basic, but useful, command line application built using it. These aren’t things that we directly needed, but we hope the CLI will be useful (and, of course, contributions are welcome – I’m sure I’ll be patching it some), and that the library will pave the way for further interesting tools that might be built in terms of the debug protocol.

    In Closing

    All being well, the next MoarVM release will include remote debug support. With time, this will lead to much better debugging support for Perl 6, to keep pace with the growing size and concurrent nature of applications people are building in Perl 6 today. Enjoy!

    my Timotimo \this: Delays and Delights

    Published by Timo Paulssen on 2018-03-04T23:24:34

    Delays and Delights

    Hi, my name is timotimo and I'm a Perl 6 developer. I've set up this blog to write reports on my TPF Grant.

    Before the actual report starts, I'd like to issue an apology. In between my grant application and the grant being accepted I developed a bit of RSI that lasted for multiple months. I already anticipated that near the end of January a move would occupy me a bit, but I had no idea how stressful it would turn out to be, and how long afterwards it would keep me occupied.


    I regret that it took me so long to actually get started. However, I've already got little bits to show for the first report of this grant!

    Delight №1: Non-crashy multi-threaded profiles

    Until now if you've added a signal handler for Ctrl-C to your program, or used run, shell, or even Proc::Async or any async IO, rakudo or moarvm will have started an additional thread for you. Even if you don't necessarily realize it, this caused the profiler to either violently explode, or return useless or partial results.

    Just these past few days I've made a bit of reliability work for the profiler available to everyone running rakudo from the latest source code. Now the profiler will only sometimes abort with the error message "profiler lost sequence" – a bug that I'm also going to chase down as part of the grant. On top of that, the HTML profiler frontend now shows timing and allocations from every thread.

    Delight №2: Faster Heap Snapshots

    N.B.: I have done the work discussed in this section before the grant actually started, but I think it's still interesting for you.

    As you can tell from the – sadly mildly messed-up – grant application, I also work on a second profiler: The Heap Snapshot Profiler. It takes a snapshot of the structure of all objects in memory, and their connections. With this information, you can figure out what kinds of objects account for how much of your memory usage, and for what reason any given object is not being removed by the garbage collector.

    If you've already tried out this profiler in the past, you may have noticed that it incurs a significant memory usage increase during run-time. After that, it takes a surprising amount of time to load in the heap snapshot analyzer. This changed noticeably when I implemented a new format for heap snapshots.

    Until then, the format had one line (i.e. \n delimited) per snapshot (i.e. for each GC run) and a json-encoded list of strings up front. The snapshots themselves were then a few lines with lists of integers.
    This caused a couple of little issues in terms of performance.

    The binary-based format I replaced it with addresses both of these issues, and has a few extra features for speed purposes.

    Armed with the index at the end of the file, the binary format can be read by multiple threads at the same time, and that's exactly what the new Heap Snapshot Analyzer will do. In addition to the start positions of each section, there's also a pointer right into the middle of the one section that holds variable-sized entries. That way both halves can be read at the same time and cheaply combined to form the full result.
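    The general idea of an index at the end of the file can be sketched in Python. The section names and on-disk layout below are invented for illustration – this is not the actual heap snapshot format, just the seek-to-any-section technique it enables:

```python
import io
import struct

def write_with_index(sections):
    """Write named byte sections, then an index of (offset, length)
    entries, then an 8-byte trailer pointing at the index start."""
    buf = io.BytesIO()
    index = []
    for name, payload in sections.items():
        index.append((name, buf.tell(), len(payload)))
        buf.write(payload)
    index_start = buf.tell()
    for name, off, ln in index:
        nb = name.encode()
        buf.write(struct.pack("<B", len(nb)) + nb + struct.pack("<QQ", off, ln))
    buf.write(struct.pack("<Q", index_start))  # trailer: where the index begins
    return buf.getvalue()

def read_section(data, wanted):
    """Read the trailer, walk the index, and slice out one section
    without touching the others -- each could be read by its own thread."""
    (index_start,) = struct.unpack_from("<Q", data, len(data) - 8)
    pos = index_start
    while pos < len(data) - 8:
        nlen = data[pos]; pos += 1
        name = data[pos:pos + nlen].decode(); pos += nlen
        off, ln = struct.unpack_from("<QQ", data, pos); pos += 16
        if name == wanted:
            return data[off:off + ln]
    raise KeyError(wanted)

blob = write_with_index({"strings": b"abc", "collectables": b"xyzw"})
print(read_section(blob, "collectables"))  # b'xyzw'
```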

    The "summary all" command loads every snapshot, measures the total size of recorded objects, how many objects there are, and how many connections exist. The result is displayed as a table. Running this against a 1.1 gigabyte big example snapshot with 44 snapshots takes about a minute, uses up 3⅓ CPU cores out of the 4 my machine has, and maxes out at 3 gigs of ram used.[2] That's about 19 megabytes per second. It's not blazing fast, but it's decent.

    Delight №3: Web Frontend

    Sadly, there isn't anything to show for this yet, but I've started a Cro application that lets the user load up a heap snapshot file and load individual snapshots in it.

    It doesn't seem like much at all, but the foundation on which the other bits will rest is laid now.

    I intend for the next posts to have a bunch of tiny animated gif screencasts for you to enjoy!

    Delight №4: Faster Profile File Writing

    The SQL output will likely be the primary data storage format after my grant is done. Therefore I've taken first steps to optimize outputting the data. Right now it already writes out data about 30% faster during the actual writing stage. The patch needs a bit of polish, but then everyone can enjoy it!

    Next steps

    The next thing I'd like to work towards is trying out the profiler with a diverse set of workloads: From hyper/race to a Cro application. That will give me a good idea of any pain points you'd encounter. For example, once the ThreadPoolScheduler's supervisor thread runs, it'll spend a majority of its time sleeping in between ticks. This shows up very prominently in the Routines list, to name one example.

    Of course, I take inspiration from my fellow Perl 6 developers and users on the IRC channel and the mailing lists, who offer a steady supply of challenges :)

    If you have questions, suggestions, or comments, please feel free to ping me on IRC, or find this blog post on reddit.

    Thanks for reading; see you in the next one!
      - Timo

    1. It could have been done in C code, but it feels weird to have JSON string escaping code built into MoarVM. ↩︎

    2. Figuring out why the ram usage is so high is also on my list. Amusingly, the heap analyzer can help me improve the heap analyzer! ↩︎

    brrt to the future: Some things I want

    Published by Bart Wiegmans on 2018-02-27T16:38:00

    Lately I've been fixing a few bugs here and there in the JIT compiler, as well as trying to advance the JIT expression optimizer. The story of those bugs is interesting but in this post I want to focus on something else, namely some support infrastructure that I think we should have that would make working on MoarVM and spesh/jit in particular much nicer.

    There are a bunch of things I want related to runtime control of spesh and the JIT. Then there's more ambitious stuff that still falls under 'housekeeping', and more ambitious stuff still that would fall under optimizations and general functionality improvements.

    There is definitely more out there I want to do, but this should be enough to keep me busy for a while. And if anyone else wants to take a try at any of these, they'd be very welcome to :-).

    Zoffix Znet: Perl 6: On Specs, Versioning, Changes, and… Breakage

    Published on 2018-02-20T00:00:00

    Details on how the Perl 6 language changes.

    Rakudo Star Release 2018.01

    Published on 2018-01-29T00:00:00

    6guts: Of sisters, stacks, and CPAN

    Published by jnthnwrthngtn on 2018-01-26T21:27:47

    Recently, an open letter was published by Elizabeth Mattijsen, a long-time contributor to the Perl community who has in recent years contributed copiously to Rakudo Perl 6, as well as working to promote Perl (both 5 and 6) at numerous events. The letter made a number of points, some of them yielding decidedly unhappy responses. I’ve been asked a number of times by now for my take on the letter and, having had time to consider things, I’ve decided to write up my own thoughts on the issues at hand.

    Oh sister

    A number of years back, I played a part in forming the “sister language narrative”. The reality – then and now – is that both Perl 5 and Perl 6 have userbases invested in them and a team willing to develop the languages and their implementations. These userbases partly overlap, and partly consist of people with an interest in only one or the other.

    Following the Perl mantra that “there’s more than one way to do it”, I saw then – and see now – no reason for the advent of Perl 6 to impose an “end of life” on Perl 5. During the many years that Perl 6 took to converge on a stable language release with a production-ready implementation, Perl 5 continued to evolve to better serve its userbase. And again, I see no reason why it should not continue to do so. For one, Perl 6 is not a drop-in replacement for Perl 5, and it’s unreasonable to expect everyone using Perl 5 today to have a business case for migrating their existing systems to use Perl 6. When there’s a team eager to carry Perl 5 forwards, what’s to lose in continuing to work towards serving that Perl 5 userbase better? Caring about and supporting an existing userbase is a sign of a mature language development community. It shows that we’re not about “move fast and break stuff”, but “you can trust us with your stuff”.

    Moreover, it’s very much the case that Perl 5 and Perl 6 make different trade-offs. To pick one concrete example, Perl 6 makes it easy to run code across multiple threads, and even uses multiple threads internally (for example, performing optimization and JIT compilation on a background thread). Which is great…except the only winning move in a game involving both threads and fork() is not to play. Sometimes one just can’t have their cake and eat it, and if you’re wanting a language that more directly gives you your POSIX, that’s probably always going to be a strength of Perl 5 over Perl 6.

    For these reasons and more, it was clear to me that the best way forward was to simply accept, and rejoice in, the Perl community having multiple actively developed languages to offer the world. So where did the sister language narrative come in?

    The number 6 happens to be a larger number than 5, and this carries some implications. I guess at the outset of the Perl 6 project, it was indeed imagined that Perl 6 would be a successor to Perl 5. By now, it’s instead like – if you’ll excuse me a beer analogy – Rochefort 8 and Rochefort 10: both excellent beers, from the same brewery, who have no reason to stop producing the 8 simply because they produce the 10. I buy both, and they’re obviously related, though different, and of course I have my preference, but I’m glad they both exist.

    The point of the “sister language” narrative was to encourage those involved with Perl 5 and Perl 6 to acknowledge that both languages will continue to move forward, and to choose their words and actions so as to reflect this.

    I continue to support this narrative, both in a personal capacity, and as the Rakudo Perl 6 Pumpking. (For those curious, “pumpking” is a cute Perl-y word for “project leader”, although one could be forgiven for guessing it means “flatulent monarch”.) Therefore, I will continue to choose my words and actions so as to support it, unless a new consensus with equally broad community buy-in comes to replace it.

    I accept that this narrative may not be perfect for everyone, but it has been quite effective in encouraging those who feel strongly about just Perl 5 or just Perl 6 to focus their efforts constructively on building things, rather than trying to tear down the work of others. Therefore, it was no surprise to me that, when Liz’s open letter and follow-up comments went against that narrative, the consequences were anything but constructive.

    I can’t, and don’t feel I should try to, control the views of others who contribute to Perl 6. Within reason, a diversity of views is part of a healthy community. I do, however, have a few things to ask of everyone. Firstly, when expressing a view that is known not to have widespread community consensus, please go to some lengths to make it clear it is a personal position. Secondly, when reading somebody’s expressed position on a matter, don’t assume it is anything more than their position unless clearly stated. And – perhaps most importantly – please also go to lengths to discuss topics calmly and politely. I was deeply disappointed by the tone of many of the discussions I saw taking place over the last week. We can do better.

    Perl 5 on the Rakudo stack?

    The next section of the letter considered the possibility of a “Butterfly Perl 5” project: effectively, a port of Perl 5 to run atop of the Rakudo stack (in reality, this doesn’t involve Rakudo much at all, but rather NQP and MoarVM). As a compiler geek, this of course piques my interest, because what could be more fun than writing a compiler? :-) And before anyone gets excited – no, I don’t have the time to work on such a thing. But, in common with Liz, I’d be supportive of anyone wishing to experiment in that direction. There will be some deep challenges, and I’ll issue my usual warning that syntax is what gets all the attention but semantics are what will eat you alive.

    Where I will disagree, however, is on the idea of a moratorium on new features in Perl 5. These appear slowly anyway (not because Perl 5 Porters are inefficient, but because adding new features to such a widely used language with a 30-year-old runtime is just a really darn hard thing to be doing). Given the immense technical challenges a Perl 5-on-new-VM effort would already face, the odd new feature would be a drop in the bucket.

    My one piece of unsolicited advice to those working on Perl 5 would be to borrow features from Perl 6 very carefully, because they are generally predicated on a different type system and many other details that differ between the Perls. (My diagnosis of why smartmatch has presented such a challenge in Perl 5 is because it was designed assuming various aspects of the Perl 6 type system, which don’t map neatly to Perl 5. This isn’t surprising, given Perl 6 very consciously set out to do differently here.) But should Perl 5 simply stop exploring ways to make things better for its userbase? No, I don’t think so. From a Perl 5 user point of view (which is the only one I have), adding subroutine signatures – which deliver existing semantics with less boilerplate – feels like a sensible thing to do. And adding features to keep up with the needs of the latest versions of the Unicode specification is a no-brainer. “Perl (5 or 6) is great at Unicode” is a meme that helps us all.

    The CPAN butterfly plan

    The final part of the letter proposes a focused porting effort of Perl 5 modules to Perl 6. This idea has my support. While the Perl 6 module ecosystem has been growing – and that’s wonderful – there’s still a good number of things missing. Efforts to address that expand the range of tasks to which Perl 6 can be conveniently applied. Of course, one can reach such modules already with the excellent Inline::Perl5 too. Some folks have reservations about dual runtimes in production, however, and interop between two runtimes has some marshalling cost – although with recent optimization work, the cost has been brought well down from what it was.

    Of course, in some areas it’s strategic to build something afresh that takes into account the opportunities offered by Perl 6. That’s why I designed Cro by reading RFCs and considering the best way to implement them in Perl 6. Even then, I did it with the benefit of 20 years experience doing web stuff – without which, I suspect the result would have been rather less useful. Porting existing modules – taking time to tweak their APIs to feel more comfortable in Perl 6 – means that existing knowledge built up by the authors of those modules can be carried forward, which is surely a win.

    In closing

    I’ve had the privilege of working with Liz for a number of years. While her open letter makes a number of points I disagree with, I have no reason to believe it wasn’t written out of good intentions. Some people have wondered if any good can come of it. That’s up to us as a community. I think we can, at least, take away a few lessons.

    And, last but not least, it has been clear that – while it has in the last days often been expressed in raw and heated ways – we, as the Perl community, have two languages we’re passionate about and are keen to drive forward. Let’s do that together, in peace, not in pieces.

    Zoffix Znet: Perl 6 Core Hacking: QASTalicious

    Published on 2018-01-26T00:00:00

    Overview of "Q" Abstract Syntax Trees + bug fix tutorial

    gfldex: Expensive Egg-Timers

    Published by gfldex on 2017-12-31T13:28:01

    If you use a CLI you might have done something along the lines of:

    sleep 1m 30s; do-the-next-thing

    I have a script called OK that will display a short text in a hopeful green and morse code O-K via the PC speaker. By doing so I turn my computer into an expensive egg-timer.

    As of late I found myself waiting for longer periods of time and was missing a count-down so I could estimate how much more time I can waste playing computer games. The result is a program called count-down.

    Since I wanted to mimic the behaviour of sleep as closely as possible I had a peek into its source code. That made me realise how lucky I am to be allowed to use Perl 6. If I strip all the extra bits a count-down needs, I’m at 33 lines of code compared to 154 lines of GNU sleep. The boilerplate I have is mostly for readability, like defining a subset called Seconds and a Regex called number.

    Errors in the arguments to the script will be caught by the where clause in MAIN’s signature. Since there are no further multi candidates for MAIN that might interfere, the usage message will be displayed automatically if arguments are not recognized. Pretty much all lines in the C implementation deal with argument handling and the fact that they can’t trust their arguments until the last bit of handling is done. With a proper signature, a Perl 6 Routine can fully trust its arguments and no further error handling is needed. Compared to the C version (which does a lot less), the code can be read linearly from top to bottom and is much more expressive. After changing a few identifiers I didn’t feel the need for comments anymore. Even some unclear code, like the splitting on numbers while keeping the values, becomes clear in the next lines where I sum up a list of seconds.
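    The argument handling boils down to turning suffixed numbers like "1m" and "30s" into seconds and summing them. A rough Python equivalent of just that part (the function name and accepted suffixes here are assumptions, not the script's actual code):

```python
import re

UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def to_seconds(*args):
    """Parse sleep-style duration arguments and sum them into seconds."""
    total = 0.0
    for arg in args:
        m = re.fullmatch(r"(\d+(?:\.\d+)?)([smhd]?)", arg)
        if m is None:
            raise ValueError(f"unrecognised duration: {arg!r}")
        total += float(m.group(1)) * UNITS[m.group(2) or "s"]
    return total

print(to_seconds("1m", "30s"))  # 90.0
```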

    Now I can comfortably count down the rest of a year that was made much better by a made better Perl 6. I wish you all a happy 2018.

    Perl 6 Advent Calendar: Bonus Xmas – Concurrent HTTP Server implementation and the scripter’s approach

    Published by ramiroencinas on 2017-12-25T01:52:24

    First of all, I want to highlight Jonathan Worthington‘s work with Rakudo Perl6 and IO::Socket::Async. Thanks Jon!


    I like to make scripts; write well-organized sequences of actions, get results and do things with them.

    When I began with Perl6 I discovered a spectacular ecosystem, where I could put my ideas into practice in the way that I like: script manner. One of these ideas was to implement a small HTTP server to play with it. Looking at other projects and modules related to Perl6, HTTP and sockets, I discovered that the authors behind them were programmers with great experience in Object-Oriented programming.

    Perl6 paradigms

    Perl6 supports the three most popular programming paradigms: Procedural, Object-Oriented, and Functional.

    I think that the Object-Oriented paradigm is fine when you design an application or service that will grow, will do many and varied things, and will see many changes. But I don’t like things that grow too much and change too often; that’s why I like scripts and their native procedural approach: they promote simplicity and effectiveness quickly. I like small (step by step) things that do great things quickly.

    The Functional paradigm is awesome in my opinion; you can take a function and use it like a var, among other amazing things.

    Perl6 Supplies are like a V12 engine

    Shortly after I started with Perl6, I began a translation into the Spanish language. Looking at the documentation of Perl6, I discovered the great concurrent potential that Perl6 has. The concurrent aspect of Perl6 was more powerful than I thought.

    The idea I had for the HTTP server with Perl6 began with Perl6 Supplies (asynchronous data streams with multiple subscribers), specifically with the class IO::Socket::Async. All socket management, data transmission and concurrency is practically automatic and easy to understand. It was perfect for making, and playing with, a small concurrent but powerful service.

    Based on the examples of the IO::Socket::Async documentation I started to implement a small HTTP server with pseudoCGI support in the mini-http-cgi-server project, and it worked as I expected. As I got what I wanted, I was satisfied and I left this project for a while. I didn’t like things to grow too much.

    But then, preparing a talk for the Madrid Perl Workshop 2017 (thanks to Madrid Perl Mongers and Barcelona Perl Mongers guys for the event support), I had enough motivation to do something more practical, something where web front-end coders could do their job well and communicate with the back-end where Perl6 is awaiting. On the one hand, the typical public html static structure, and on the other hand a Perl6 module including several webservices waiting for the web requests from the front-end guys.

    Then Wap6 was born (Web App Perl6).

    The Wap6 structure

    I like the structure for a web application that Wap6 implements:

    The public folder contains the friendly front-end stuff, like static html, javascript, css, etc., that is, the front-end developer space. The webservices folder contains the back-end stuff: a Perl6 module including a function per webservice.

    This same folder level contains the solution entry point, a Perl6 script that, among other things like initializing server parameters, contains the mapping between routes and webservices:

    my %webservices =
      '/ws1' => ( &ws1, 'html' ),
      '/ws2' => ( &ws2, 'json' ),
    ;

    As you can see, not only are the routes mapped to the corresponding webservice, but they also specify the return content-type of the webservice (like HTML or JSON). That is, you type http://domain/ws1 in the web browser and the ws1 function returns the response data with the corresponding content-type, as we will see later.

    All the routes to the webservices are in the %webservices hash, and it is passed to the main function wap with other useful named params:

    wap(:$server-ip, :$server-port, :$default-html, :%webservices);

    The core of Wap6

    The wap function is located outside, in the core lib module that Wap6 uses, and contains the concurrent and elegant V12 engine:

    react {
      whenever IO::Socket::Async.listen($server-ip,$server-port) -> $conn {
        whenever $conn.Supply(:bin) -> $buf {
          my $response = response(:$buf, :$current-dir, :$default-html, :%webservices);
          $conn.write: $response.encode('UTF-8');
        }
      }
    }

    This is a three-piece (react – whenever – IO::Socket::Async) reactive, concurrent and asynchronous context. When a transmission arrives from the web client ($conn), it is placed in a new binary Supply $buf ($conn.Supply(:bin)), and $buf, together with other things like the %webservices hash, is sent to the response function that runs the HTTP logic. Finally, the return value of the response function is written back to the web client.
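    Stripped of Wap6’s routing and HTTP parsing, the same react/whenever pattern is enough for a stand-alone echo server. A minimal sketch (the port number 8888 is an arbitrary choice, not part of Wap6):

    react {
      whenever IO::Socket::Async.listen('', 8888) -> $conn {
        whenever $conn.Supply(:bin) -> $buf {
          $conn.write: $buf;   # send the received bytes straight back
        }
      }
    }

    Run it and point nc localhost 8888 at it: every chunk you type comes back unchanged, each connection handled concurrently without any explicit thread management.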

    The response function (also located outside, in the core lib) contains the HTTP parser stuff: it splits the incoming data (the HTTP entity) into headers and body, performs validations, extracts basic HTTP header information like the method (GET or POST) and the URI (Uniform Resource Identifier), determines whether the requested resource is a webservice (from the webservices folder) or a static file (from the public folder), gets the data from that resource, and returns it to the wap function, which writes the response back to the web client, as we have seen before.

    The Webservices

    The response function validates $buf and extracts the HTTP method from the request header, which can be GET or POST (I don’t think it will support more HTTP methods in the future). In the GET case it puts the URL params (if any) into $get-params. In the POST case it puts the request body into $body.

    Then it’s time to check whether the web client has requested a webservice. The URI, included along with $get-params, is extracted with the URI module, and the result is placed in $path:

    given $path {
      when %webservices{"$_"}:exists {
        my ( &ws, $direct-type ) = %webservices{"$_"};
        my $type = content-type(:$direct-type);
        return response-headers(200, $type) ~ &ws(:$get-params, :$body);
      }
    }

    If $path exists in the %webservices hash, the client wants a webservice. The function then extracts the corresponding webservice callable &ws from the %webservices hash (yes, I also love the Functional paradigm :-) ) along with the corresponding content-type. Then it calls the webservice function &ws with the $get-params and request $body parameters. Finally it returns the HTTP response entity, concatenating the response headers with the data returned by the webservice.

    The callable webservice &ws can be ws1, located in the Perl6 module from webservices folder:

    sub ws1 ( :$get-params, :$body ) is export {
      if $get-params { return 'From ws1: ' ~ $get-params; }
      if $body { return 'From ws1: ' ~ $body; }
    }

    In this demo context the webservice simply returns the input, that is, the $get-params (when GET) or the $body (when POST).

    When the client requests a static file

    After discarding all the other possibilities, if the client requests a static file hosted in the public folder, like html, js, css, etc., then:

    given $path {
      default {
        my $filepath = "$current-dir/public/$path";
        my $type = content-type(:$filepath);
        return response-headers(200, $type) ~ slurp $filepath;
      }
    }

    It returns the response headers including the matched content-type and the requested file contents with slurp.

    And that’s all, folks! A concurrent web server in the script-procedural manner: Wap6.


    I’m happy with the results of Wap6. I don’t pretend that it grows a lot, but I’m always tempted to continue adding more features: SSL support (completed!), session management (in progress), cookies, file uploads, etc.

    Perl6 has put on the table a very powerful way to perform concurrent network operations: IO::Socket::Async, a masterpiece. Also, with Perl6 you can mix the Object-Oriented, Procedural and Functional paradigms as you wish. With these capabilities you can design a concurrent asynchronous service and implement it quickly.

    If you want a more serious approach to HTTP services and concurrency in the Perl6 ecosystem, take a look at Cro; it represents a great opportunity to establish Perl6 as a powerful entity in the HTTP services space. Jonathan Worthington wrote about it on day 9 of this same Advent Calendar.

    Meanwhile, I will continue playing with Wap6 in the script manner, contributing to the Perl6 ecosystem and learning from the best coders in the world, I mean: Perl and Perl6 coders, of course :-)

    Perl 6 Advent Calendar: Day 24 – Solving a Rubik’s Cube

    Published by coke on 2017-12-24T00:33:51


    I have a speed cube on my wish list for Christmas, and I'm really excited about it. :) I wanted to share that enthusiasm with some Perl 6 code.

    I graduated from high school in '89, so I'm just the right age to have had a Rubik's cube through my formative teen years. I remember trying to show off on the bus and getting my time down to just under a minute. I got a booklet from a local toy store back in the 80s that showed an algorithm on how to solve the cube, which I memorized. I don't have the booklet anymore. I've kept at it over the years, but never at a competitive level.

    In the past few months, YouTube has suggested a few cube videos to me based on my interest in the standupmaths channel; seeing the world record come in under 5 seconds makes my old time of a minute seem ridiculously slow.

    Everyone I've spoken to who can solve the cube has been using a different algorithm than I learned, and the one discussed on standupmaths is yet a different one. The advanced version of this one seems to be commonly used by those who are regularly setting world records, though.

    Picking up this algorithm was not too hard; I found several videos, especially one describing how to solve the last layer. After doing this for a few days, I transcribed the steps to a few notes showing the list of steps, and the crucial parts for each step: desired orientation, followed by the individual turns for that step. I was then able to refer to a single page of my notebook instead of a 30-minute video, and after a few more days, had memorized the steps: being able to go from the notation to just doing the moves is a big speed up.

    After a week, I was able to solve it reliably using the new method in under two minutes; a step back, but not bad for a week's effort in my off hours. Since then (a few weeks now), I've gotten down to under 1:20 pretty consistently. Again, this is the beginner method, without any advanced techniques, and I'm at the point where I can do the individual algorithm steps without looking at the cube. (I still have a long way to go to be competitive though.)


    A quick note about the notation for moves – given that you're holding the cube with a side on the top, and one side facing you, the relative sides are:

    L (Left) R (Right) U (Up) D (Down) F (Front) B (Back)

    If you see a lone letter in the steps, like B, that means to turn that face clockwise (relative to the center of the cube, not you). If you add a ʼ to the letter, that means counter clockwise, so Rʼ would have the top piece coming down, while an R would have the bottom piece coming up.

    Additionally, you might have to turn a slice twice, which is written as U2 (it doesn't matter whether it's clockwise or not, since it's 180º from the starting point).


    The beginner's algorithm I'm working with has the following basic steps:

    1. White cross
    2. White corners
    3. Second layer
    4. Yellow cross
    5. Yellow edges
    6. Yellow corners
    7. Orient yellow corners

    If you're curious as to what the individual steps are in each, you'll be able to dig through the Rubik's wiki or the YouTube video linked above. More advanced versions of this algorithm (CFOP by Jessica Fridrich) allow you to combine steps, have specific "shortcuts" to deal with certain cube states, or solve any color as the first side, not just white.

    Designing a Module

    As I began working on the module, I knew I wanted to get to a point where I could show the required positions for each step in a way that was natural to someone familiar with the algorithm, and to have the individual steps also be natural, expressed as chained method calls on the cube.


    I also wanted to be able to dump the existing state of the cube; for now as text, but eventually being able to tie it into a visual representation as well.

    We need to be able to tell if the cube is solved; We need to be able to inspect pieces relative to the current orientation, and be able to change our orientation.

    Since I was going to start with the ability to render the state of the cube, and then quickly add the ability to turn sides, I picked an internal structure that made that fairly easy.

    The Code

    The latest version of the module is available on github. The code presented here is from the initial version.

    Perl 6 lets you create Enumerations so you can use actual words in your code instead of lookup values, so let's start with some we'll need:

    enum Side «:Up('U') :Down('D') :Front('F') :Back('B') :Left('L') :Right('R')»;
    enum Colors «:Red('R') :Green('G') :Blue('B') :Yellow('Y') :White('W') :Orange('O')»;

    With this syntax, we can use Up directly in our code, and its associated value is U.
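    A quick sketch (not from the module) of how such enums behave in practice:

    enum Side «:Up('U') :Down('D') :Front('F') :Back('B') :Left('L') :Right('R')»;

    say Up;           # Up
    say Up.value;     # U
    my %h;
    %h{Up} = 'white';   # enum values work directly as hash keys
    say %h{Up};         # white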

    We want a class so we can store attributes and have methods, so our class definition has:

    class Cube::Three {
        has %!Sides;
        submethod BUILD() {
            %!Sides{Up}    = [White xx 9];
            %!Sides{Front} = [Red xx 9];
            # ... the remaining four sides are initialized the same way
        }
    }

    We have a single attribute, a Hash called %!Sides; each key corresponds to one of the Enum sides. The value is a 9-element array of Colors. Each element of the array corresponds to a position on the cube. With white on top and red in front as the default, the colors and cell positions are shown here with the numbers & colors. (White is Up, Red is Front)

             W0 W1 W2
             W3 W4 W5
             W6 W7 W8
    G2 G5 G8 R2 R5 R8 B2 B5 B8 O2 O5 O8
    G1 G4 G7 R1 R4 R7 B1 B4 B7 O1 O4 O7
    G0 G3 G6 R0 R3 R6 B0 B3 B6 O0 O3 O6
             Y0 Y1 Y2
             Y3 Y4 Y5
             Y6 Y7 Y8

    The first methods I added were to do clockwise turns of each face.

    method F {
        self!rotate-clockwise(Front);
        self!fixup-sides([
            Pair.new(Up,    [6,7,8]),
            Pair.new(Right, [2,1,0]),
            Pair.new(Down,  [2,1,0]),
            Pair.new(Left,  [6,7,8]),
        ]);
        self;
    }

    This public method calls two private methods (denoted with the !); one rotates a single Side clockwise, and the second takes a list of Pairs, where the key is a Side, and the value is a list of positions. If you imagine rotating the top of the cube clockwise, you can see that the positions are being swapped from one to the next.

    Note that we return self from the method; this allows us to chain the method calls as we wanted in the original design.

    The clockwise rotation of a single side shows a raw Side being passed, and uses array slicing to change the order of the pieces in place.

    # 0 1 2 6 3 0
    # 3 4 5 -> 7 4 1
    # 6 7 8 8 5 2
    method !rotate-clockwise(Side \side) {
        %!Sides{side}[0,1,2,3,5,6,7,8] = %!Sides{side}[6,3,0,7,1,8,5,2];
    }
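    The in-place reordering relies on Perl 6 allowing assignment to an array slice; here is a tiny stand-alone demonstration of the same trick:

    my @a = 'a' .. 'i';
    # assign a rearrangement of selected elements in one statement;
    # the right-hand side is evaluated before anything is stored
    @a[0, 1, 2] = @a[2, 1, 0];
    say @a[0 .. 2].join;   # cba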

    To add the rest of the notation for the moves, we add some simple wrapper methods:

    method F2 { self.F.F; }
    method Fʼ { self.F.F.F; }

    F2 just calls the move twice; Fʼ cheats: 3 rights make a left.

    At this point, I had to make sure that my turns were doing what they were supposed to, so I added a gist method (which is called when an object is output with say).

          W Y W
          Y W Y
          W Y W
    G B G R O R B G B O R O
    B G B O R O G B G R O R
    G B G R O R B G B O R O
          Y W Y
          W Y W
          Y W Y

    The source for the gist is:

    method gist {
        my $result;
        $result = %!Sides{Up}.rotor(3).join("\n").indent(6);
        $result ~= "\n";
        for 2,1,0 -> $row {
            for (Left, Front, Right, Back) -> $side {
                my @slice = (0,3,6) >>+>> $row;
                $result ~= ~%!Sides{$side}[@slice].join(' ') ~ ' ';
            }
            $result ~= "\n";
        }
        $result ~= %!Sides{Down}.rotor(3).join("\n").indent(6);
        $result;
    }

    A few things to note: rotor(3) breaks a side's 9 cells into rows of three, indent(6) pads the Up and Down faces over to the middle, and (0,3,6) >>+>> $row hyper-adds the row offset to each column index.

    The gist is great for stepwise inspection, but for debugging, we need something a little more compact:

    method dump {
        gather for (Up, Front, Right, Back, Left, Down) -> $side {
            take %!Sides{$side}.join('');
        }.join('|');
    }

    This iterates over the sides in a specific order, and then uses the gather take syntax to collect string representations of each side, then joining them all together with a |. Now we can write tests like:

    use Test; use Cube::Three;
    my $a = Cube::Three.new;
    is $a.R.U2.Rʼ.Uʼ.R.Uʼ.Rʼ.Lʼ.U2.L.U.Lʼ.U.L.dump,
        '…',   # the expected cube-state string
        'corners rotation';

    This is actually the method used in the final step of the algorithm. With this debug output, I can take a pristine cube, do the moves myself, and then quickly transcribe the resulting cube state into a string for testing.
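    For anyone unfamiliar with gather/take itself, a small sketch outside the module:

    # gather collects every value passed to take into a sequence
    my @upper = gather for <one two three> -> $word {
        take $word.uc;
    }
    say @upper.join('|');   # ONE|TWO|THREE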

    While the computer doesn't necessarily need to rotate the cube, it will make it easier to follow the algorithm directly if we can rotate the cube, so we add one for each of the six possible turns, e.g.:

    method rotate-F-U {
        # In addition to moving the side data, have to
        # re-orient the indices to match the new side.
        my $temp = %!Sides{Up};
        %!Sides{Up} = %!Sides{Front};
        %!Sides{Front} = %!Sides{Down};
        %!Sides{Down} = %!Sides{Back};
        %!Sides{Back} = $temp;
        self;
    }

    As we turn the cube from Front to Up, we rotate the Left and Right sides in place. Because the orientation of the cells changes as we change faces, as we copy the cells from face to face we may also have to rotate them to ensure they end up facing in the correct direction. As before, we return self to allow for method chaining.

    As we start testing, we need to make sure that we can tell when the cube is solved; we don't care about the orientation of the cube, so we verify that the center color matches all the other colors on the face:

    method solved {
        for (Up, Down, Left, Right, Back, Front) -> $side {
            return False unless
                %!Sides{$side}.all eq %!Sides{$side}[4];
        }
        return True;
    }

    For every side, we use a Junction of all the colors on a side to compare to the center cell (always position 4). We fail early, and then succeed only if we made it through all the sides.
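    The same .all junction trick is easy to see in isolation; so collapses the junction to a plain Bool:

    say so <W W W W>.all eq 'W';   # True
    say so <W G W W>.all eq 'W';   # False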

    Next I added a way to scramble the cube, so we can consider implementing a solve method.

    method scramble {
        my @random = <U D F R B L>.roll(100).squish[^10];
        for @random -> $method {
            my $actual = $method ~ ("", "2", "ʼ").pick(1);
            self."$actual"();
        }
    }

    This takes the six base method names, picks a bunch of random values, then squishes them (ensuring that there are no dupes in a row), and then takes the first 10 values. We then potentially append a 2 or a ʼ. Finally, we use the indirect method syntax to call the individual methods by name.
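    Two of these pieces, squish and the indirect method call, can be tried on their own (Demo is a made-up class just to show the call-by-name syntax):

    say <U U D F F F R>.squish;   # (U D F R) — only adjacent duplicates removed

    class Demo {
        method F  { 'turned F'  }
        method F2 { 'turned F2' }
    }
    my $name = 'F2';
    say"$name"();   # turned F2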

    Finally, I'm ready to start solving! And this is where things got complicated. The first steps of the beginner method are often described as intuitive. Which means it's easy to explain… but not so easy to code. So, spoiler alert, as of the publish time of this article, only the first step of the solve is complete. For the full algorithm for the first step, check out the linked github site.

    method solve {
        self.solve-top-cross;
    }
    method solve-top-cross {
        sub completed {
            %!Sides{Up}[1,3,5,7].all eq 'W' &&
            %!Sides{Front}[5] eq 'R' &&
            %!Sides{Right}[5] eq 'B' &&
            %!Sides{Back}[5] eq 'O' &&
            %!Sides{Left}[5] eq 'G';
        }
        while !completed() {
            # Move white-edged pieces in second row up to top
            # Move incorrectly placed pieces in the top row to the middle
            # Move pieces from the bottom to the top
        }
        self;
    }

    Note the very specific checks to see if we're done; we use a lexical sub to wrap up the complexity – and while we have a fairly internal check here, we see that we might want to abstract this to a point where we can say "is this edge piece in the right orientation". To start with, however, we'll stick with the individual cells.

    The guts of solve-top-cross are 100+ lines long at the moment, so I won't go through all the steps. Here's the "easy" section:

    my @middle-edges =
        [Front, Right],
        [Right, Back],
        [Back, Left],
        [Left, Front];
    for @middle-edges -> $edge {
        my $side7 = $edge[0];
        my $side1 = $edge[1];
        my $color7 = %!Sides{$side7}[7];
        my $color1 = %!Sides{$side1}[1];
        if $color7 eq 'W' {
            # find number of times we need to rotate the top:
            my $turns = (
                @ordered-sides.first($side1, :k) -
                @ordered-sides.first(%expected-sides{~$color1}, :k)
            ) % 4;
            self.U for 1..$turns;
            self.for 1..$turns;
            next MAIN;
        } elsif $color1 eq 'W' {
            my $turns = (
                @ordered-sides.first($side7, :k) -
                @ordered-sides.first(%expected-sides{~$color7}, :k)
            ) % 4;
            self.for 1..$turns;
            self.U for 1..$turns;
            next MAIN;
        }
    }

    When doing this section on a real cube, you'd rotate the cube without regard to the side pieces, and just get the cross in place. To make the algorithm a little more "friendly", we keep the centers in position for this; we rotate the Up side into place, then rotate the individual side into place on the top, then rotate the Up side back into the original place.

    One of the interesting bits of code here is the .first(..., :k) syntax, which says to find the first element that matches, and then return the position of the match. We can then look things up in an ordered list so we can calculate the relative positions of two sides.
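    A stand-alone sketch of .first(..., :k) and the wrap-around arithmetic (@sides here is a stand-in for the module's @ordered-sides):

    my @sides = <F R B L>;
    say @sides.first('B', :k);   # 2 — the index, not the value

    # relative distance between two sides, wrapping around the cube;
    # Perl 6's % always returns a result with the sign of the divisor
    say (@sides.first('R', :k) - @sides.first('L', :k)) % 4;   # 2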

    Note that the solving method only calls the public methods to turn the cube; while we use raw introspection to get the cube state, we only use "legal" moves to do the solving.

    With the full version of this method, we now solve the white cross with this program:

    #!/usr/bin/env perl6
    use Cube::Three;
    my $cube = Cube::Three.new;
    $cube.scramble;
    say $cube;
    say '';
    $cube.solve;
    say $cube;

    which generates this output given this set of moves (Fʼ L2 B2 L Rʼ Uʼ R Fʼ D2 B2). First is the scramble, and then is the version with the white cross solved.

          W G G
          Y W W
          Y Y Y
    O O B R R R G B O Y Y B
    R G O B R R G B G W O B
    Y B B R O W G G G W W O
          W W O
          Y Y O
          B R R
          Y W W
          W W W
          G W R
    O G W O R Y B B G R O G
    Y G G R R B R B Y R O G
    O O R Y O W O O R W Y B
          G G B
          B Y Y
          Y B B

    This sample prints out the moves used to do the scramble, shows the scrambled cube, "solves" the puzzle (which, as of this writing, is just the white cross), and then prints out the new state of the cube.

    Note that as we get further along, the steps become less "intuitive", and, in my estimation, much easier to code. For example, the last step requires checking the orientation of four pieces, rotating the cube if necessary, and then doing a 14-step set of moves (shown in the test above).

    Hopefully my love of cubing and Perl 6 have you looking forward to your next project!

    I'll note in the comments when the module's solve is finished, for future readers.

    Perl 6 Advent Calendar: Day 23 – The Wonders of Perl 6 Golf

    Published by AlexDaniel on 2017-12-23T00:00:05

    Ah, Christmas! What could possibly be better than sitting around the table with your friends and family and playing code golf! … Wait, what?

    Oh, right, it’s not Christmas yet. But you probably want to prepare yourself for it anyway!

    If you haven’t noticed already, there’s a great website for playing code golf: The cool thing about it is that it’s not just for perl 6! At the time of writing, 6 other langs are supported. Hmmm…

    Anyway, as I’ve got some nice scores there, I thought I’d share some of the nicest bits from my solutions. All the trickety-hackety, unicode-cheatery and mind-blowety. While we are at it, maybe we’ll even see that perl 6 is quite concise and readable even in code golf. That is, if you have a hard time putting your Christmas wishes on a card, maybe a line of perl 6 code will do.

    I won’t give full solutions to not spoil your Christmas fun, but I’ll give enough hints for you to come up with competitive solutions.

    All I want for Christmas is for you to have some fun. So get yourself rakudo to make sure you can follow along. Later we’ll have some pumpkin pie and we’ll do some caroling. If you have any problems running perl 6, perhaps join #perl6 channel on freenode to get some help. That being said, itself gives you a nice editor to write and eval your code, so there should be no problem.

    Some basic examples

    Let’s take Pascal’s Triangle task as an example. I hear ya, I hear! Math before Christmas, that’s cruel. Cruel, but necessary.

    There’s just one basic trick you have to know. If you take any row from the Pascal’s Triangle, shift it by one element and zip-sum the result with the original row, you’ll get the next row!

    So if you had a row like:

    1 3 3 1

    All you do is just shift it to the right:

    0 1 3 3 1

    And sum it with the original row:

    1 3 3 1
    + + + +
    0 1 3 3 1
    1 4 6 4 1

    As simple as that! So let’s write that in code:

    for ^16 { put (+combinations($^row,$_) for 0..$row) }

    You see! Easy!

    … oh… Wait, that’s a completely different solution. OK, let’s see:

    .put for 1, { |$_,0 Z+ 0,|$_ } … 16


    1 1
    1 2 1
    1 3 3 1
    1 4 6 4 1
    1 5 10 10 5 1
    1 6 15 20 15 6 1
    1 7 21 35 35 21 7 1
    1 8 28 56 70 56 28 8 1
    1 9 36 84 126 126 84 36 9 1
    1 10 45 120 210 252 210 120 45 10 1
    1 11 55 165 330 462 462 330 165 55 11 1
    1 12 66 220 495 792 924 792 495 220 66 12 1
    1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
    1 14 91 364 1001 2002 3003 3432 3003 2002 1001 364 91 14 1
    1 15 105 455 1365 3003 5005 6435 6435 5005 3003 1365 455 105 15 1

    Ah-ha! There we go. So what happened there? Well, in perl 6 you can create sequences with a very simple syntax: 2, 4, 8 … ∞. Normally you’ll let it figure out the sequence by itself, but you can also provide a code block to calculate the values. This is awesome! In other languages you’d often need to have a loop with a state variable, and here it does all that for you! This feature alone probably needs an article or 𝍪.

    The rest is just a for loop and a put call. The only trick here is to understand that it is working with lists, so when you specify the endpoint for the sequence, it is actually checking for the number of elements. Also, you need to flatten the list with |.
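    The sequence operator deserves a couple of stand-alone examples:

    # the endpoint can be a value ...
    say 2, 4, 8 ... 64;                    # (2 4 8 16 32 64)

    # ... or you can take as many elements as you need from a lazy sequence
    say (1, 1, { $^a + $^b } ... *)[^8];   # (1 1 2 3 5 8 13 21)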

    If you remove whitespace and apply all tricks mentioned in this article, this should get you to 26 characters. That’s rather competitive.

    Similarly, other tasks often have rather straightforward solutions. For example, for Evil Numbers you can write something like this:

    .base(2).comb(~1) %% 2 && .say for ^50

    Remove some whitespace, apply some tricks, and you’ll be almost there.

    Let’s take another example: Pangram Grep. Here we can use set operators:

    'a'..'z' ⊆ .lc.comb && .say for @*ARGS

    Basically, almost all perl 6 solutions look like real code. It’s the extra -1 character oomph that demands extra eye pain, but you didn’t come here to listen about conciseness, right? It’s time to get dirty.


    Let’s talk numbers! 1 ² ③ ٤ ⅴ ߆… *cough*. You see, in perl 6 any numeric character (that has a corresponding numeric value property) can be used in the source code. The feature was intended to allow us to have some goodies like ½ and other neat things, but this means that instead of writing 50 you can write a single character carrying that numeric value. Some golfing platforms will count the number of bytes when encoded in UTF-8, so it may seem like you’re not winning anything. But what about 1000000000000 and 𖭡? In any case, counts codepoints, so the length of any of these characters will be 1.

    So you may wonder, which numbers can you write in that manner? There you go:

    -0.5 0.00625 0.025 0.0375 0.05 0.0625 0.083333 0.1
    0.111111 0.125 0.142857 0.15 0.166667 0.1875 0.2
    0.25 0.333333 0.375 0.4 0.416667 0.5 0.583333 0.6
    0.625 0.666667 0.75 0.8 0.833333 0.875 0.916667 1
    1.5 2 2.5 3 3.5 4 4.5 5 5.5 6 6.5 7 7.5 8 8.5 9 10
    11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
    28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44
    45 46 47 48 49 50 60 70 80 90 100 200 300 400 500
    600 700 800 900 1000 2000 3000 4000 5000 6000 7000
    8000 9000 10000 20000 30000 40000 50000 60000 70000
    80000 90000 100000 200000 216000 300000 400000
    432000 500000 600000 700000 800000 900000 1000000
    100000000 10000000000 1000000000000

    This means, for example, that in some cases you can save 1 character when you need to negate the result. There are many ways you can use this, and I’ll only mention one particular case. The rest you figure out yourself, as well as how to find the actual character that can be used for any particular value (hint: loop all 0x10FFFF characters and check their .univals).
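    The hinted-at scan might look like this (brute force, but you only ever run it once):

    # print every character whose numeric value is exactly 1/2
    for 0 .. 0x10FFFF -> $cp {
        say $cp.chr, "\t", $cp.uniname if $cp.unival == 1/2;
    }

    ½ (VULGAR FRACTION ONE HALF) is among the matches; non-numeric codepoints have a unival of NaN and simply never compare equal, so no special-casing is needed.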

    For example, when golfing you want to get rid of unnecessary whitespace, so maybe you’ll want to write something like:

    say 5max3 # ERROR

    It does not work, of course, and we can’t really blame the compiler for not untangling that mess. However, check this out:

    say ⑤max③ # OUTPUT: «5␤»

    Woohoo! This will work in many other cases.


    If there is a good golfing language, that’s not Perl 6. I mean, just look at this:

    puts 10<30?1:2 # ruby
    say 10 <30??1!!2 # perl 6

    Not only are TWO more characters needed for the ternary, but also some obligatory whitespace around the < operator! What’s wrong with them, right? How dare they design a language with no code golf in mind⁉

    Well, there are some ways we can work around it. One of them is operator chaining. For example:

    say 5>3>say(42)

    If 5 is ≤ 3, then there’s no need to do the other comparison, so it won’t run it. This way we can save at least one character. On a slightly related note, remember that junctions may also come in handy:

    say 'yes!' if 5==3|5

    And of course, don’t forget about unicode operators: ≤, ≥, ≠.

    Typing is hard, let’s use some of the predefined strings!

    You wouldn’t believe how useful this is sometimes. Want to print the names of all chess pieces? OK:

    say ('♔'…'♙')».uniname».words»[2]

    This saves just a few characters, but there are cases when it can halve the size of your solution. But don’t stop there, think of error messages, method names, etc. What else can you salvage?

    Base 16? Base 36? Nah, Base 0x10FFFF!

    One of the tasks tells us to print φ to the first 1000 decimal places. Well, that’s very easy!

    say 1.6180339887498948482045868343656381177203091798057628621354486227052604628189024497072072041893911374847540880753868917521266338622235369317931800607667263544333890865959395829056383226613199282902678806752087668925017116962070322210432162695486262963136144381497587012203408058879544547492461856953648644492410443207713449470495658467885098743394422125448770664780915884607499887124007652170575179788341662562494075890697040002812104276217711177780531531714101170466659914669798731761356006708748071013179523689427521948435305678300228785699782977834784587822891109762500302696156170025046433824377648610283831268330372429267526311653392473167111211588186385133162038400522216579128667529465490681131715993432359734949850904094762132229810172610705961164562990981629055520852479035240602017279974717534277759277862561943208275051312181562855122248093947123414517022373580577278616008688382952304592647878017889921990270776903895321968198615143780314997411069260886742962267575605231727775203536139362


    Okay, that takes a bit more than 1000 characters… Of course, we can try to calculate it, but that is not exactly in the Christmas spirit. We want to cheat.

    If we look at the docs about polymod, there’s a little hint:

    my @digits-in-base37 = 9123607.polymod(37 xx *); # Base conversion

    Hmmm… so that gives us digits for any arbitrary base. How high can we go? Well, it depends on what form we would like to store the number in. Given that counts codepoints, we can use base 0x10FFFF (i.e. using all available codepoints). Or, in this case we will go with base 0x10FFFE, because:

    ☠☠☠⚠⚠⚠ WARNING! WARNING! WARNING! ⚠⚠⚠☠☠☠
    ☠☠☠⚠⚠⚠ WARNING! WARNING! WARNING! ⚠⚠⚠☠☠☠

    When applied to our constant, it should give something like this:


    How do we reverse the operation? During one of the squashathons I found a ticket about a feature that I didn’t know about previously. Basically, the ticket says that Rakudo is doing stuff that it shouldn’t, which is of course something we will abuse next time. But for now we’re within the limits of relative sanity:

    say 1.,:1114110[o򲔐𦔏򄠔񟯶󐚉񯓦򝼤񋩟󅾜󖾩񆔈򡔙򝤉񎗎񕧣񡉽󎖪󽡂􂳚񖨸򆀍􋵔󴈂𨬎򭕴򢑬񛉿򰏷𰑕󜆵򾩴ந񘚡𐂇򘮇񢻳𺐅࿹𪏸񄙍򞏡򈘏󬥝𫍡𱀉򌝓򭀢񤄓􋯱󜋝񟡥𖏕񖾷򇋹🼟򠍍񿷦𧽘嗟󬯞񿡥𸖉񿒣򄉼󣲦󉦩󸾧󎓜𦅂񰃦񲍚􍰍𧮁񦲋򶟫𰌡򡒶䨀𗋨𛰑򾎹򄨠󑓮򁇐𵪶𱫞񱛦󿥐򌯎񾖾򳴪򕩃󧨑𥵑򦬽񡇈򌰘񿸶񿜾寡򔴩񻊺񛕄񌍌󶪼􁇘񶡁󃢖򗔝񽑖򮀓󘓥󼿶󢽈򰯬끝󡯮磪󂛕򩻛񲽤򊥍􆃂뎛𘝞򊕆𝧒񰕺𭙪򺗝󲝂󊹛𺬛𛒕񿢖󵹱󮃞󟝐񱷳􋻩𿞸񫵗򣥨򚘣򶝠򯫞󌋩򑠒򅳒𔇆񘦵򌠐𢕍򡀋𪱷𢍟񗈼򙯬񨚑񙦅󘶸󜹕򷒋񤍠󻁾.ords]

    Note that the string has to be in reverse. Other than that it looks very nice. 192 characters including the decoder.

    This isn’t a great idea for printing constants that are otherwise computable, but given the length of the decoder and relatively dense packing rate of the data, this comes handy in other tasks.
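    The round trip between polymod and the :radix[...] form is easier to see in a printable base like 36:

    my $n = 9123607;
    my @digits = $n.polymod(36 xx *);   # least-significant digit first
    say :36[@digits.reverse];           # 9123607

    polymod hands back the digits least significant first, and :36[...] expects the most significant first, which is exactly why the packed string above has to be reversed before decoding.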

    All good things must come to an end; horrible things – more so

    That’s about it for the article. For more code golf tips I’ve started this repository:

    Hoping to see you around on! Whether using perl 6 or not, I’d love to see all of my submissions beaten.


    Perl 6 Advent Calendar: Day 22 – Features of Perl 6.d

    Published by liztormato on 2017-12-22T00:00:49

    So there we are. Two years after the first official release of Rakudo Perl 6. Or 6.c to be more precise. Since Matt Oates already touched on the performance improvements since then, Santa thought to counterpoint this with a description of the new features for 6.d that have been implemented since then. Because there have been many, Santa had to make a selection.

    Tweaking objects at creation

    Any class that you create can now have a TWEAK method. This method will be called after all other initializations of a new instance of the class have been done, just before it is being returned by .new. A simple, slightly contrived example, in which a class A has one attribute with a default value of 42, but which should change the value if the default is specified at object creation:

    class A {
        has $.value = 42;
        method TWEAK(:$value = 0) { # default prevents warning
            # change the attribute if the default value is specified
            $!value = 666 if $value == $!value;
        }
    }
    # no value specified, it gets the default attribute value
    dd A.new;              # A.new(value => 42)
    # value specified, but it is not the default
    dd A.new(value => 77); # A.new(value => 77)
    # value specified, and it is the default
    dd A.new(value => 42); # A.new(value => 666)

    Concurrency Improvements

    The concurrency features of Rakudo Perl 6 saw many improvements under the hood. Some of these were exposed as new features. Most prominent are Lock::Async (a non-blocking lock that returns a Promise) and atomic operators.

    In most cases, you will not need to use these directly, but it is probably good that you know about atomic operators if you’re engaged in writing programs that use concurrency features. An often occurring logic error, especially if you’ve been using threads in Pumpking Perl 5, is that there is no implicit locking on shared variables in Rakudo Perl 6. For example:

    my int $a;
    await (^5).map: {
        start { ++$a for ^100000 }
    }
    say $a; # something like 419318

    So why doesn’t that show 500000? The reason for this is that we had 5 threads incrementing the same variable at the same time. And since incrementing consists of a read step, an increment step and a write step, it became very easy for one thread to do the read step at the same time as another thread, and thus lose an increment. Before we had atomic operators, the correct way of writing the above code would be:

    my int $a;
    my $l =;
    await (^5).map: {
        start {
            for ^100000 {
                $l.protect( { ++$a } )
            }
        }
    }
    say $a; # 500000

    This would give you the correct answer, but would be at least 20x as slow.

    Now that we have atomic variables, the above code becomes:

    my atomicint $a;
    await (^5).map: {
        start { ++⚛$a for ^100000 }
    }
    say $a; # 500000

    Which is very much like the original (incorrect) code. And this is at least 6x as fast as the correct code using Lock.protect.
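Alongside the ⚛ operators, a family of atomic-* routines was also added. A minimal sketch (the variable name is illustrative):

```raku
my atomicint $hits = 0;

$hits⚛++;                  # atomic post-increment
atomic-fetch-inc($hits);   # the same operation in routine form
atomic-assign($hits, 10);  # atomic store

say atomic-fetch($hits);   # atomic read: 10
```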

    Unicode goodies

    So many, so many. For instance, you can now use ≤, ≥ and ≠ as Unicode versions of <=, >= and != (complete list).

    You can now also create a grapheme by specifying the Unicode name of the grapheme, e.g.:

    say "BUTTERFLY".parse-names; # 🦋

    or create the Unicode name string at runtime:

    my $t = "THUMBS UP SIGN, EMOJI MODIFIER FITZPATRICK TYPE";
    print "$t-$_".parse-names for 3..6; # 👍🏼👍🏽👍🏾👍🏿

    Or collate instead of just sort:

    # sort by codepoint value
    say <ä a o ö>.sort; # (a o ä ö)
    # sort using Unicode Collation Algorithm
    say <ä a o ö>.collate; # (a ä o ö)

    Or use unicmp instead of cmp:

    say "a" cmp "Z";    # More
    say "a" unicmp "Z"; # Less

    Or that you can now use any Unicode digits in Match variables ($١ for $1), negative numbers (-١ for -1), and radix bases (:۳("22") for :3("22")).

    It’s not for nothing that Santa considers Rakudo Perl 6 to have the best Unicode support of any programming language in the world!

    Skipping values

    You can now call .skip on Seq and Supply to skip a number of values that were being produced. Together with .head and .tail this gives you ample manipulexity with Iterables and Supplies.

    By the way, .head now also takes a WhateverCode so you can indicate you want all values except the last N (e.g. .head(*-3) would give you all values except the last three). The same goes for .tail (e.g. .tail(*-3) would give you all values except the first three).
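The WhateverCode forms sit alongside the plain numeric forms, so the four variants look like this:

```raku
say (1..10).head(3);    # (1 2 3)
say (1..10).head(*-3);  # (1 2 3 4 5 6 7)
say (1..10).tail(3);    # (8 9 10)
say (1..10).tail(*-3);  # (4 5 6 7 8 9 10)
```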

    Some additions to the Iterator role make it possible for iterators to support the .skip functionality even better. If an iterator can be more efficient in skipping a value than to actually produce it, it should implement the skip-one method. Derived from this are the skip-at-least and skip-at-least-pull-one methods that can be provided by an iterator.

    An example of the usage of .skip to find out the 1000th prime number:

    say (^Inf).grep(*.is-prime)[999]; # 7919


    say (^Inf).grep(*.is-prime).skip(999).head; # 7919

    The latter is slightly more CPU efficient, but more importantly much more memory efficient, as it doesn’t need to keep the first 999 prime numbers in memory.

    Of Bufs and Blobs

    Buf has become much more like an Array, as it now supports .push, .append, .pop, .unshift, .prepend, .shift and .splice. It also has become more like Str with the addition of a subbuf-rw (analogous with .substr-rw), e.g.:

    my $b =,101,102,103,104,105);
    $b.subbuf-rw(2,3) =^5);
    say $b.perl; #,101,0,1,2,3,4,105)

    You can now also .allocate a Buf or Blob with a given number of elements and a pattern. Or change the size of a Buf with .reallocate:

    my $b = Buf.allocate(10,(1,2,3));
    say $b.perl; #,2,3,1,2,3,1,2,3,1)
    $b.reallocate(5);
    say $b.perl; #,2,3,1,2)

    Testing, Testing, Testing!

    The plan subroutine of the Test module now also takes an optional :skip-all parameter to indicate that all tests in the file should be skipped. Or you can call bail-out to abort the test run, marking it as failed. Or set the PERL6_TEST_DIE_ON_FAIL environment variable to a true value to indicate you want the test run to end as soon as the first test fails.
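A minimal sketch of how these fit together in a test file (the skip condition and file name are made up for illustration):

```raku
use Test;

# skip the whole file when a prerequisite is missing
plan :skip-all('no config file found') unless 'config'.IO.e;

plan 1;
ok 1 + 1 == 2, 'arithmetic still works';

# bail-out 'fatal problem';  # would abort the whole run, marking it failed
```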

    What’s Going On

    You can now introspect the number of CPU cores in your computer by calling Kernel.cpu-cores. The amount of CPU used since the start of the program is available in Kernel.cpu-usage, while you can easily check the name of the Operating System with VM.osname.
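In code, using exactly the names mentioned above (outputs will of course vary per machine):

```raku
say Kernel.cpu-cores;  # e.g. 8
say Kernel.cpu-usage;  # microseconds of CPU used since program start
say VM.osname;         # e.g. "linux" or "darwin"
```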

    And as if that is not enough, there is a new Telemetry module which you need to load when needed, just like the Test module. The Telemetry module provides a number of primitives that you can use directly, such as:

    use Telemetry;
    say T<wallclock cpu max-rss>; # (138771 280670 82360)

    This shows the number of microseconds since the start of the program, the number of microseconds of CPU used, and the number of kilobytes of memory in use at the time of the call.

    If you want to get a report of what has been going on in your program, you can use snap and have a report appear when your program is done. For instance:

    use Telemetry;
    snap;
    Nil for ^10000000;  # something that takes a bit of time

    The result will appear on STDERR:

    Telemetry Report of Process #60076
    Number of Snapshots: 2
    Initial/Final Size: 82596 / 83832 Kbytes
    Total Time:           0.55 seconds
    Total CPU Usage:      0.56 seconds
    No supervisor thread has been running
    wallclock  util%  max-rss
       549639  12.72     1236
    --------- ------ --------
       549639  12.72     1236
    wallclock  Number of microseconds elapsed
        util%  Percentage of CPU utilization (0..100%)
      max-rss  Maximum resident set size (in Kbytes)

    If you want a snapshot of your program every 0.1 of a second, you can use the snapper:

    use Telemetry;
    snapper;
    Nil for ^10000000;  # something that takes a bit of time

    The result:

    Telemetry Report of Process #60722
    Number of Snapshots: 7
    Initial/Final Size: 87324 / 87484 Kbytes
    Total Time:           0.56 seconds
    Total CPU Usage:      0.57 seconds
    No supervisor thread has been running
    wallclock  util%  max-rss
       103969  13.21      152
       101175  12.48
       101155  12.48
       104097  12.51
       105242  12.51
        44225  12.51        8
    --------- ------ --------
       559863  12.63      160
    wallclock  Number of microseconds elapsed
        util%  Percentage of CPU utilization (0..100%)
      max-rss  Maximum resident set size (in Kbytes)

    And many more options are available here, such as getting the output in .csv format.

    The MAIN thing

    You can now modify the way MAIN parameters are handled by setting options in %*SUB-MAIN-OPTS. The default USAGE message is now available inside the MAIN as the $*USAGE dynamic variable, so you can change it if you want to.
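A small sketch of both features together; :named-anywhere is the documented option for %*SUB-MAIN-OPTS, and the script body itself is made up:

```raku
# allow named arguments to appear after positional ones on the command line
my %*SUB-MAIN-OPTS = :named-anywhere;

sub MAIN($file, :$verbose) {
    note $*USAGE if $verbose;  # the generated usage text, now introspectable
    say "processing $file";
}
```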

    Embedding Perl 6

    Two new features make embedding Rakudo Perl 6 easier to handle:
    the &*EXIT dynamic variable can now be set to specify the action to be taken when exit() is called.

    Setting the environment variable RAKUDO_EXCEPTIONS_HANDLER to "JSON" will throw Exceptions in JSON, rather than text, e.g.:

    $ RAKUDO_EXCEPTIONS_HANDLER=JSON perl6 -e '42 = 666'
    {
      "X::Assignment::RO" : {
        "value" : 42,
        "message" : "Cannot modify an immutable Int (42)"
      }
    }

    Bottom of the Gift Bag

    While rummaging through the still quite full gift bag, Santa found the following smaller prezzies:

    Time to catch a Sleigh

    Santa would like to stay around to tell you more about what’s been added, but there simply is not enough time to do that. If you really want to keep up-to-date on new features, you should check out the Additions sections in the ChangeLog that is updated with each Rakudo compiler release.

    So, catch you again next year!

    Best wishes from


    Perl 6 Advent Calendar: Day 21 – Sudoku with Junctions and Sets

    Published by scimon on 2017-12-21T00:00:31

    There are a number of core elements in Perl6 that give you powerful tools to do things in a concise and expressive way. Two of these are Junctions and Sets, which share a number of characteristics but are also wildly different. In order to demonstrate their power I’m going to look at how they can be used with a simple problem: Sudoku puzzles.

    Sudoku : A refresher

    So, for those of you who don’t know: a Sudoku puzzle is a 9 by 9 grid that comes supplied with some cells filled in with numbers between 1 and 9. The goal is to fill in all the cells with numbers between 1 and 9 so that no row, column or sub square has more than one of any number in it.

    There are a few ways to represent a Sudoku puzzle, my personal favourite being a 9 by 9 nested array, for example:

    my @game = [

    In this situation the cells with no value assigned are given a 0, this way all the cells have an Integer value assigned to them. The main thing to bear in mind with this format is you need to reference cells using @game[$y][$x] rather than @game[$x][$y].

    Junctions : Quantum Logic Testing

    One of the simplest ways to use Junctions in Perl6 is in a logical test. The Junction can represent a selection of values you are wanting to test against. For example :

    if ( 5 < 1|10 < 2 ) { say "Spooky" } else { say "Boo" }

    So, not only does this demonstrate operator chaining (something that experienced programmers may already be looking confused about) but the any Junction ( 1|10 ) evaluates to True for both 5 < 10 and 1 < 2. In this way Junctions can be extremely powerful already, it’s when you assign a variable container to them that it gets really interesting.
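The four junction constructors behave as their names suggest; | is simply shorthand for any (and & for all):

```raku
say so 3 == any(1, 2, 3);  # True
say so all(2, 4, 6) %% 2;  # True: all are even
say so one(1, 2, 2) == 2;  # False: 2 appears twice
say so none(1, 2, 3) > 5;  # True: nothing exceeds 5
```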

    One of the tests we’d like to be able to make on our Sudoku puzzle is to see if it’s full. By which I mean every cell has been assigned a value greater than 0. A full puzzle may not be completed correctly, but there’s a guess in each cell. Another way of putting that would be that none of the cells has a value of 0. Thus we can define a Junction, store it in a scalar variable, and test it at any point to see if the puzzle is full.

    my $full-test = none( (^9 X ^9).map(-> ($x,$y) {
        @game[$y][$x]
    } ) );
    say so $full-test == 0; # False

    In this case the game still has a number of 0’s in it, so seeing if $full-test equals 0 evaluates to False. Note that without the so to cast the result to a Boolean you’ll get a breakdown of the cells that are equal to 0; only if all of these are False will the Junction evaluate to True.

    Note also the use of the ^9 and X operators to generate two Ranges from 0 to 8 and then the cross product of these two lists of 9 values to make a list of all the possible X,Y co-ordinates of the puzzle. It’s this kind of powerful simplicity that is one of the reasons I love Perl6. But I digress.

    The strength of this method is that once you’ve defined the Junction you don’t need to modify it. If you change the values stored in the Array the Junction will look at the new values instead (note this only holds true for updating individual cells, if you swap out a whole sub array with a new one you’ll break the Junction).

    So that’s a simple use of a Junction to store a multi-variable test you can reuse. But it gets more interesting when you realise that the values in a Junction can themselves be Junctions.

    Let’s look at a more complex test: a puzzle is complete if, for every row, column and square in the puzzle, there is only one of each number. In order to make this test we’re going to need three helper functions.

    subset Index of Int where 0 <= * <= 8;

    sub row( Index $y ) {
        return (^9).map( { ( $_, $y ) } );
    }

    sub col( Index $x ) {
        return (^9).map( { ( $x, $_ ) } );
    }

    multi sub square( Index $sq ) {
        my $x = $sq % 3 * 3;
        my $y = $sq div 3 * 3;
        return square( $x, $y );
    }

    multi sub square( Index $x, Index $y ) {
        my $tx = $x div 3 * 3;
        my $ty = $y div 3 * 3;
        return ( (0,1,2) X (0,1,2) ).map( -> ( $dx, $dy ) {
            ( $tx + $dx, $ty + $dy )
        } );
    }

    So here we define an Index as a value between 0 and 8 and then define our sub‘s to return a List of List‘s, with the sub lists each being a pair of X and Y indices. Note that our square function can accept one or two positional arguments. In the single-argument version we number the sub squares with 0 being in the top left, then going left to right with 8 being the bottom right. The two-argument version gives us the list of cells in the square containing the given cell (including itself).

    So with these in place we can define our one() Junctions for each row, column and square. Once we have them we can then put them into an all() Junction.

    my $complete-all = all(
        (^9).map( {
            |(
                one( row( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) ),
                one( col( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) ),
                one( square( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) )
            )
        } )
    );

    Once we have that testing to see if the puzzle is complete is quite simple.

    say [&&] (1..9).map( { so $complete-all == $_ } );

    Here we test each possible cell value of 1 through 9 against the Junction; in each case this will be True if all the one() Junctions contain exactly one of the value. Then we use the [&&] reduction meta-operator to chain these results to give a final True / False value (True if all the results are True and False otherwise). Again this test can be reused as you add values to the cells and will only return True when the puzzle has been completed and is correct.
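The [ ] reduction meta-operator works with any infix operator, not just &&:

```raku
say [&&] True, True, True;   # True
say [&&] True, False, True;  # False
say [+] 1..10;               # 55
say [max] 3, 1, 4, 1, 5;     # 5
```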

    Once again we’ve got a complex test boiled down to a single line of code. Our $complete-all variable needs to be defined once and is then valid for the rest of the session.

    This sort of nested Junction test can reach many levels. A final example is testing whether a current puzzle is valid. By which I mean it’s not necessarily complete, but it doesn’t have any duplicate numbers in any row, column or square. Once again we can make a Junction for this: each row, column or square is valid if, for each of the possible values, one or none of its cells is set to that value. Thus our creation of the Junction is similar to the $complete-all one.

    my $valid-all = all(
        (^9).map( {
            |(
                none( row( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) )
                |
                one( row( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) ),
                none( col( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) )
                |
                one( col( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) ),
                none( square( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) )
                |
                one( square( $_ ).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ) )
            )
        } )
    );

    The test for validity is basically the same as the test for completeness.

    say [&&] (1..9).map( { so $valid-all == $_ } );

    Except in this case our puzzle is valid and so we get a True result.

    Sets : Collections of Objects

    Whilst Junctions are useful to test values, they aren’t as useful if we want to try solving the puzzle. But Perl6 has another type of collection that can come in very handy. Sets (and their related types Bags and Mixes) let you collect items and then apply mathematical set operations to them to find how different Sets interact with each other.

    As an example we’ll define a possible function that returns the values that are possible for a given cell. If the cell already has a value set we will return the empty list.

    sub possible( Index $x, Index $y, @game ) {
        return () if @game[$y][$x] > 0;
        (
            (1..9)
                (-)
            set(
                ( row($y).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ).grep( * > 0 ) ),
                ( col($x).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ).grep( * > 0 ) ),
                ( square($x,$y).map( -> ( $x, $y ) {
                    @game[$y][$x]
                } ).grep( * > 0 ) )
            )
        ).keys.sort;
    }

    Here we find the difference between the numbers 1 through 9 and the Set made up of the values of the row, column and square the given cell is in. We ignore cells with a 0 value using grep. As Sets store their details as unordered key / value pairs we get the keys and then sort them for consistency. Note that here we’re using the ASCII ((-)) version of the set difference operator; we could also use the Unicode version (∖) instead.
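The other set operators follow the same pattern; here are a few applied to plain lists (which are coerced to Sets), with the ordering of the printed set elements unspecified:

```raku
say (1..9) (-) (3, 5, 7);     # set difference: 1 2 4 6 8 9
say (1, 2, 3) (|) (3, 4);     # union: 1 2 3 4
say (1, 2, 3) (&) (2, 3, 4);  # intersection: 2 3
say 3 (elem) (1, 2, 3);       # membership: True
```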

    We could define the set as the union of each of the results from row, col and square and the result would be the same. Also we’re using the two argument version of square in this case.

    It should be noted that this is the simplest definition of possible values; there’s no additional logic going on. But even this simple result lets us use the simplest of solving algorithms: loop over every cell in the grid and, if it has exactly one possible value, set it to that value. In this case we’ll loop round, get a list of cells to set, then loop through the list and set the values. If the list of cells to set is empty, or the puzzle is complete, then we stop.

    my @updates;
    repeat {
        @updates = (^9 X ^9).map( -> ($x,$y) {
            ($x,$y) => possible($x,$y,@game)
        } ).grep( *.value.elems == 1 );
        for @updates -> $pair {
            my ( $x, $y ) = $pair.key;
            @game[$y][$x] = $pair.value[0];
        }
    } while ( @updates.elems > 0 &&
              ! [&&] (1..9).map( { so $complete-all == $_ } ) );

    So we make a list of Pairs where the key is the x,y coordinates and the value is the list of possible values. Then we remove all those that don’t have exactly one value. This is continued until there are no cells found with a single possible value or the puzzle is complete.

    Another way of finding solutions is to get values that only appear in one set of possibilities in a given row, column or square. For example, if the cells of a row have the following possibilities:

    (1,2,3) (2,3,4) () () (4,5) () () (2,3,4) ()

    1 and 5 only appear in the row once each. We can make use of the symmetric set difference operator and operator chaining to get this.

    say (1,2,3) (^) (2,3,4) (^) () (^) () (^) (4,5) (^) () (^) () (^) (2,3,4) (^) ()
    set(1 5)

    Of course in that case we can use the reduction meta-operator on the list instead.

    say [(^)] (1,2,3),(2,3,4),(),(),(4,5),(),(),(2,3,4),()
    set(1 5)

    So in that case the algorithm is simple (here I’ll just cover rows; the column and square code is basically the same).

    my @updates;
    for ^9 -> $idx {
        my $only = [(^)] row($idx).map( -> ( $x,$y ) {
            possible($x,$y,@game)
        } );
        for $only.keys -> $val {
            for row($idx) -> ($x,$y) {
                if $val (elem) possible($x,$y,@game) {
                    @updates.push( ($x,$y) => $val );
                }
            }
        }
    }

    We can then loop through the updates array as above. Combining these two algorithms can solve a large number of Sudoku puzzles by themselves and simplify others.

    Note we have to make two passes: firstly we get the numbers we’re looking for, and then we look through each row to find where each number appears. For this we use the (elem) operator. Sets can also be referenced using Associative subscripts, for example:

    say set(1,5){1}; # True

    A note on Objects

    So far all the examples have used basic integers. But there’s nothing stopping you using Objects in your Junctions and Sets. There are a few things to bear in mind though: Sets use the === identity operator for their tests. Most objects will fail an identity check unless you have cloned them or have defined the WHICH method in a way that allows them to be compared.

    For the Sudoku puzzle you may want to create a CellValue class that stores whether the number was one of the initial values in the puzzle. If you do this though you’ll need to override WHICH and make it return the Integer value of the Cell. As long as you are fine with an identity check being technically invalid in this case (two different CellValues may have the same value but they won’t be the same object) then you can put them in Sets.
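A sketch of such a CellValue class; using ValueObjAt here is my assumption about one reasonable way to write a value-based WHICH, not code from the original post:

```raku
class CellValue {
    has Int  $.value;
    has Bool $.initial = False;

    # identity is based on the numeric value only
    method WHICH { ValueObjAt.new("CellValue|$!value") }
}

# two distinct objects with the same value collapse to one Set element
say 3),, :initial) ).elems; # 1
```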

    I hope you’ve found this interesting. Junctions and Sets are two of the many parts of Perl6 that give you the power to do complex tasks simply. If you’re interested in the code here, there’s an object-based version available that you can install with:

    zef install Game::Sudoku

    Strangely Consistent: Has it been three years?

    Published by Carl Mäsak

    007, the toy language, is turning three today. Whoa.

    On its one-year anniversary, I wrote a blog post to chronicle it. It seriously doesn't feel like two years since I wrote that post.

    On and off, in between long stretches of just being a parent, I come back to 007 and work intensely on it. I can't remember ever keeping a side project alive for three years before. (Later note: Referring to the language here, not my son.) So there is that.

    So in a weird way, even though the language is not as far along as I would expect it to be after three years, I'm also positively surprised that it still exists and is active after three years!

    In the previous blog post, I proudly announce that "We're gearing up to an (internal) v1.0.0 release". Well, we're still gearing up for v1.0.0, and we are closer to it. The details are in the roadmap, which has become much more detailed since then.

    Noteworthy things that happened in these past two years:

    Things that I'm looking forward to right now:

    I tried to write those in increasing order of difficulty.

    All in all, I'm quite eager to one day burst into #perl6 or #perl6-dev and actually showcase examples where macros quite clearly do useful, non-trivial things. 007 is and has always been about producing such examples, and making them run in a real (if toy) environment.

    And while we're not quite over that hump yet, we're perceptibly closer than we were two years ago.

    Belated addendum: Thanks and hugs to sergot++, to vendethiel++, to raiph++ and eritain++, for sharing the journey with me so far.

    6guts: A unified and improved Supply concurrency model

    Published by jnthnwrthngtn on 2017-11-24T00:54:34

    Perl 6 encourages the use of high-level constructs when writing concurrent programs, rather than dealing with threads and locks directly. These not only aid the programmer in producing more correct and understandable programs, but they also afford those of us working on Perl 6 implementation the opportunity to improve the mechanisms beneath the constructs over time. Recently, I wrote about the new thread pool scheduler, and how improving it could bring existing programs lower memory usage, better performance, and the chance to scale better on machines with higher core counts.

    In this post, I’ll be discussing the concurrency model beneath Supply, which is the Perl 6 mechanism for representing an asynchronous stream of values. The values may be packets arriving over a socket, output from a spawned process or SSH command, GUI events, file change notifications, ticks of a timer, signals, and plenty more besides. Giving all of these a common interface makes it easier to write programs that compose multiple asynchronous data sources, for example, a GUI application that needs to update itself when files change on disk, or a web server that should shut down cleanly upon a signal.

    Until recently, there were actually two different concurrency models, one for supply blocks and one for all of the methods available on a Supply. Few people knew that, and fewer still had a grasp of what that meant. Unfortunately, neither model worked well with the Perl 6.d non-blocking await. Additionally, some developers using supply/react/whenever in their programs ran into a few things that they had expected would Just Work, which in reality did not.

    Before digging in to the details, I’d like to take a moment to thank for providing the funding that allowed me to dedicate a good bit of time to this task. It’s one of the trickiest things I’ve worked on in a while, and having a good chunk of uninterrupted time to focus on it was really helpful. So, thanks!

    Supplies and concurrency

    The first thing to understand about Supply, and supply blocks, is that they are a tool for concurrency control. The power of supply blocks (react also) is that, no matter how many sources of asynchronous data you tap using whenever blocks, you can be sure that only one incoming message will be processed at a time. The same principle operates with the various methods: if I do Supply.merge($s1, $s2).tap(&some-code), then I know that even if $s1 or $s2 were to emit values concurrently, they will be pushed onward one at a time, and thus I can be confident that &some-code will be called with one value at a time.

    These one-message-at-a-time semantics exist to enable safe manipulation of state. Any lexical variables declared within a supply block will exist once per tap of the supply block, and can be used safely inside of it. Furthermore, it’s far easier to write code that processes asynchronous messages when one can be sure the processing code for a given message will run to completion before the next message is processed.


    Another interesting problem for any system processing asynchronous messages is that of backpressure. In short, how do we make a source of messages emit them at a rate no greater than that of the processing logic? The general principle with Supply is that the sender of a message pays the cost of its processing. So, if I have $, then whatever emits at the source pays the cost of the map of that message along with the processing done by the tap callback.

    The principle is easy to state, but harder to deliver on. One of the trickiest questions revolves around recursion: what happens when a Supply ends up taking an action that results in it sending a message to itself? That may sound contrived, but it can happen very easily. When the body of a supply block runs, the whenever blocks trigger tapping of a supply. If the tapped Supply were to synchronously emit a message, we would immediately have a problem: we can’t process it now, because that would violate the one-at-a-time rule. A real world example where this happens? An HTTP/2 client, where the frame serializer immediately emits the connection preface when tapped, to make sure it gets sent before anything else can be sent. (Notice how this in itself also relies on the non-interruption principle.) This example comes straight out of Cro’s HTTP/2 implementation.

    A further issue is how we apply the backpressure. Do we block a real thread? Or can we do better? If we go doing real blocking of thread pool threads, we’ll risk exhausting the pool at worst, or in the better case force the program to create more threads (and so use more memory) than it really needs.

    Where things stood

    So, how did we do on these areas before my recent work? Not especially great, it turned out.

    First, let’s consider the mechanism that was used for everything except the case of supply blocks. Supply processing methods generally check that their input Supply is serial – that is, delivered one message at a time – by calling its serial method. If not, they serialize it. (In fact, they all call the serialize method, which just returns identity if serial is True, thus factoring out the check). The upshot of this is that we only have to enforce the concurrency control once in a chain of operations that can’t introduce concurrency themselves. That’s good, and has been retained during my changes.

    So, the interesting part is how serialize enforces one-at-a-time semantics. Prior to my recent work, it did this using a Lock. This has a few decent properties: locks are pretty well optimized, and they block a thread in an OS-aware way, meaning that the OS scheduler knows not to bother scheduling the waiting thread until the lock is released. They also have some less good properties. One is that using Lock blocks the use of Perl 6.d non-blocking await in any downstream code (a held Lock can’t migrate between threads), which was a major motivator to look for an alternative solution. Even if that were not an issue, the use of Lock really blocks up a thread, meaning that it will not be available for the thread pool to use for anything else. Last, but certainly not least, Lock is a reentrant mutex – meaning that we could end up violating the principle that a message is completely processed before the next message is considered in some cases!

    For supply blocks, a different approach had been taken. The supply block (or react block) instance had a queue of messages to process. Adding to, or taking from, this queue was protected by a Lock, but that was only held in order to update the queue. Messages were not removed from the queue until they had been processed, which in turn provided a way of knowing if the block instance is “busy”. If a message was pushed to the block instance when it was busy, then it was put onto the queue…and that is all. So who paid for the message processing?

    It turns out, it was handled by the thread that was busy in the supply block at the time that message arrived! This is pretty sensible if the message was a result of recursion. However, it could lead to violation of the principle that the sender of a message should pay for its processing costs. An unlucky sender could end up paying the cost of an unbounded number of messages of other senders! Interestingly enough, there haven’t been any bug reports about this, perhaps because most workloads simply don’t hit this unfairness, and those that do aren’t impacted by it anyway. Many asynchronous messages arrive on the thread pool, and it’s probably not going to cause much trouble if one thread ends up working away at a particular supply block instance that is being very actively used. It’s a thread pool, and some thread there will have to do the work anyway. The unfairness could even be argued to be good for memory caches!

    Those arguments don’t justify the problems of the previous design, however. Queues are pretty OK at smoothing out peaks in workloads, but the stable states of a queue are being full and being empty, and this was an unbounded queue, so “full” would mean “out of memory”. Furthermore, there was no way to signal back towards a producer that it was producing too fast.

    Towards a unified model: Lock::Async

    So, how to do better? I knew I wanted a unified concurrency control mechanism to use for both serialize and supply/react blocks. It had to work well with non-blocking await in Perl 6.d. In fact, it needed to – in the case a message could not be processed now, and when the sender was on the thread pool – do exactly what non-blocking await does: suspend the sender by taking a continuation, and schedule that to be run when the message it was trying to send could be sent. Only in the case that the sender is not a pool thread should it really block. Furthermore, it should be fair: message senders should queue up in order to have their message processed. On top of that, it needed to be efficient in the common case, which is when there is no contention.

    In response to these needs, I built Lock::Async: an asynchronous locking mechanism. But what is an asynchronous lock? It has a lock method which returns a Promise. If nothing else is holding the lock, then the lock is marked as taken (this check-and-acquire operation is implemented efficiently using an atomic operation) and the Promise that is returned will already be Kept. Otherwise, the Promise that is returned will be Kept when the lock becomes available. This means it can be used in conjunction with await. And – here’s the bit that makes this particularly useful – it means that it can use the same infrastructure built for non-blocking await in Perl 6.d. Thus, an attempt to acquire an asynchronous lock that is busy on the thread pool will result in that piece of work being suspended, and the thread can be used for something else. As usual, in a non-pool thread, real blocking (involving a condition variable) will take place, meaning those who need to be totally in control of what thread they’re running on, and so use Thread directly, will maintain that ability. (Examples of when that might be needed include writing bindings using NativeCall.)

    When the unlock method is called, then there are two cases. The first is if nobody contended for the lock in the meantime: in this case, then another atomic operation can be used to mark it as free again. Otherwise, the Promise of the first waiter in line is kept. This mechanism provides fairness: the lock is handed out to senders in the order that they requested it.
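In terms of the API described above, manual use of an asynchronous lock looks roughly like this (the processing step is a placeholder):

```raku
my $l =;

await $l.lock;   # the Promise is already Kept if the lock was free,
                 # and is Kept later when we are handed the lock
# ... process exactly one message here ...
$l.unlock;       # marks the lock free, or keeps the next waiter's Promise
```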

    Thus, using Lock::Async for concurrency control of supplies gives us the properties set out above: non-blocking suspension of senders on pool threads, fair ordering of message senders, and an efficient uncontended path.

    As an aside: as part of this I spent some time thinking about the semantics of await inside of a supply or react block. Should it block processing of other messages delivered to the block? I concluded that yes, it should: it provides a way to apply backpressure (for example, await of a write to a socket), and also means that await isn’t an exception to the “one message processed at a time, and processed fully” design principle. It’s not like getting the other behavior is hard: just use a nested whenever.

    Taps that send messages

    So, I put use of Lock::Async in place and all was…sort of well, but only sort of. Something like this:

    my $s = supply {
        for ^10 -> $i {
            emit $i;
        }
    }
    react {
        whenever $s {
            .say
        }
    }
    Would hang. Why? Because the lock protecting the react block was obtained to run its mainline, setting up the subscription to $s. The setup is treated just like processing a message: it should run to completion before any other message is processed. Being able to rely on that is important for both correctness and fairness. The supply $s, however, wants to synchronously emit values as soon as it is tapped, so it tries to acquire the async lock. But the lock is held, so it waits on the Promise, but in doing so blocks progress of the calling react block, so the lock is never released. It’s a deadlock.

    An example like this did work under the previous model, though for not entirely great reasons. The 10 messages would be queued up, along with the done message of $s. Then, its work complete, the calling react block would get back control, and then the messages would be processed. This was OK if there was just a handful of messages. But something like this:

    my $s = supply {
        loop {
            emit ++$;
        }
    }
    react {
        whenever $s {
            .say
        }
        whenever Promise.in(0.1) {
            done
        }
    }

    Would hang, eating memory rapidly until it ran out, since it would just queue messages forever and never give back control.

    The new semantics are as follows: if the tap method call resulting from a whenever block being encountered causes an await that can not immediately be satisfied, then a continuation is taken, rooted at the whenever. It is put into a queue. Once the message (or initial setup) that triggered the whenever completes, and the lock is released, then those continuations are run. This process repeats until the queue is empty.

    What does this mean for the last two examples? The first one suspends at the first emit in the for ^10 { ... } loop, and is resumed once the setup work of the react block is completed. The loop then delivers the messages into the react block, producing them and having them handled one at a time, rather than queuing them all up in memory. The second example, which just hung and ate memory previously, now works as one would hope: it displays values for a tenth of a second, and then tears down the supply when the react block exits due to the done.
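    A very loose toy model of this queue-and-drain behavior, using Python generators as stand-in "continuations" (the names and mechanism here are illustrative, not MoarVM APIs):

```python
from collections import deque

pending = deque()   # continuations rooted at suspended whenevers
processed = []

def tapped_supply():
    """Stands in for the supply block: it wants to emit 0..9 synchronously,
    but each emit is a suspension point while the react lock is held."""
    for i in range(10):
        yield i

# "Setup" of the react block runs under the lock; tapping the supply
# would emit immediately, so the sender's continuation is queued instead.
pending.append(tapped_supply())

# Setup complete, lock released: drain queued continuations until empty,
# delivering messages one at a time instead of piling them up in memory.
while pending:
    for value in pending.popleft():
        processed.append(value)

assert processed == list(range(10))
```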

    This opens up supply blocks to some interesting new use cases. For example, this works now:

    my $s = supply {
        loop {
            await Promise.in(1);
            emit ++$;
        }
    }
    react {
        whenever $s {
            .say
        }
    }

    Which isn’t itself useful (just use Supply.interval), but the general pattern here – of doing an asynchronous operation in a loop and emitting the result it gives each time – is. A supply emitting the results of periodic polling of a service, for example, is pretty handy, and now there’s a nice way to write it using the supply block syntax.

    Other recursion

    Not all recursive message delivery results from synchronous emits from a Supply tapped by a whenever. While the solution above gives nice semantics for those cases – carefully not introducing extra concurrency – it’s possible to get into situations where processing a message results in another message that loops back to the very same supply block. This typically involves a Supplier being emitted into. This isn’t common, but it happens.

    Recursive mutexes – which are used to implement Lock – keep track of which thread is currently holding the lock, using thread ID. This is the reason one cannot migrate code that is holding such a lock between threads, and thus why one being held prevents non-blocking await from being, well, non-blocking. Thus, a recursion detection mechanism based around thread IDs was not likely to end well.

    Instead, Lock::Async uses dynamic variables to keep track of which async locks are currently held. These are part of an invocation record, and so can safely be transported across thread pool threads, meaning they interact just fine with the Perl 6.d non-blocking await, and the new model of non-blocking handling of supply contention.

    But what does it do when it detects recursion? Clearly, it can’t just decide to forge ahead and process the message anyway, since that violates the “messages are processed one at a time and completely” principle.

    I mentioned earlier that Lock::Async has lock and unlock methods, but those are not particularly ideal for direct use: one must be terribly careful to make sure the unlock is never missed. Therefore, it has a protect method taking a closure. This is then run under the lock, thus factoring out the lock and unlock, meaning it only has to be got right in one place.

    There is also a protect-or-queue-on-recursion method. This is where the recursion detection comes in. If recursion is detected, then instead of the code being run now, a then is chained on to the Promise returned by lock, and the passed closure is run in the then. Effectively, messages that can’t be delivered right now because of recursion are queued for later, and will be sent from the thread pool.

    This mechanism’s drawback is that it becomes a place where concurrency is introduced. On the other hand, given we know that we’re introducing it for something that’s going to run under a lock anyway, that’s a pretty safe thing to be doing. A good property of the design is that recursive messages queue up fairly with external messages.
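    The dynamic-variable-based recursion detection can be sketched in Python with contextvars, which (like Perl 6 dynamic variables) follow the logical task rather than the OS thread. The class below is a simplified stand-in for the real method, and the draining policy is only a model:

```python
import contextvars
from collections import deque

# Dynamic variable: the set of async locks held by the current logical
# task. Unlike a thread ID, this travels with the task across pool threads.
held = contextvars.ContextVar("held-async-locks", default=frozenset())

class AsyncLock:
    def __init__(self):
        self.deferred = deque()  # messages queued due to recursion

    def protect_or_queue_on_recursion(self, code):
        if self in held.get():            # recursion: we already hold it
            self.deferred.append(code)    # queue for later delivery
            return
        token = held.set(held.get() | {self})
        try:
            code()
        finally:
            held.reset(token)
            while self.deferred:          # drain queued messages fairly
                self.protect_or_queue_on_recursion(self.deferred.popleft())

# Recursive delivery is deferred until the current message completes.
log = []
lock = AsyncLock()
def outer():
    log.append("outer")
    lock.protect_or_queue_on_recursion(lambda: log.append("inner"))
    log.append("outer-end")
lock.protect_or_queue_on_recursion(outer)
assert log == ["outer", "outer-end", "inner"]
```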

    Future work

    The current state is much better than what came before it. However, as usual, there’s more that can be done.

    One thing that bothers me slightly is that there are now two different mechanisms both dealing with different cases of message recursion: one for the case where it happens during the tapping of a supply caused by a whenever block, and one for other (and decidedly less common) cases. Could these somehow be unified? It’s not immediately clear to me either way. My gut feeling is that they probably can be, but doing so will involve some different trade-offs.

    This work has also improved the backpressure situation in various ways, but we’ve still some more to do in this area. One nice property of async locks is that you can check if you were able to acquire the lock or not before actually awaiting it. That can be used as feedback about how busy the processing path ahead is, and thus it can be used to detect and make decisions about overload. We also need to do some work to communicate down to the I/O library (libuv on MoarVM) when we need it to stop reading from things like sockets or process handles, because the data is arriving faster than the application is able to process it. Again, it’s nice that we’ll be able to do this improvement and improve the memory behavior of existing programs without those programs having to change.

    In summary…

    This work has replaced the two concurrency models that previously backed Supply with a single unified model. The new model makes better use of the thread pool, deals with back-pressure shortcomings with supply blocks, and enables some new use cases of supply and react. Furthermore, the new approach interacts well with Perl 6.d non-blocking await, removing a blocker for that.

    These are welcome improvements, although further unification may be possible, and further work on back-pressure is certainly needed. Thanks once again to for helping me dedicate the time to think about and implement the changes. If your organization would like to help me continue the journey, and/or my Perl 6 work in general, I’m still looking for funding.

    6guts: MoarVM Specializer Improvements Part 4: Argument Guards

    Published by jnthnwrthngtn on 2017-11-09T21:27:31

    So far in this series, I have discussed how the MoarVM dynamic optimizer gathers statistics, uses them to plan what to optimize, and then produces specialized versions of hot parts of a program to speed up execution. In this final part, I’ll look at how we switch from unoptimized code into optimized code, which centers around argument guards.

    But wait, what about code-gen?

    Ah, yes, I knew somebody would ask that. At the end of part 3, we had a data structure representing optimized code, perhaps for a routine, method, or block. While going from bytecode to a CFG in SSA form was a fairly involved process, going back to bytecode is far simpler: we iterate the basic blocks in order, iterate each of the instructions within a basic block, and write out the bytecode for each of the instructions. Done!

    There are, of course, a few complications to take care of. When we have a forward branch, we don’t yet know the offset within the bytecode of the destination, so a table is needed to fix those up later. Furthermore, a new table of exception handler offsets will be needed, since the locations of the covered instructions and handlers will have moved. Beyond those bits of bookkeeping, however, there’s really not much more to it than a loop spitting out bytecode from instruction nodes.
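    The emit-then-patch loop for forward branches can be sketched as follows; the opcodes and block structure are symbolic stand-ins, not real MoarVM bytecode:

```python
def assemble(blocks):
    """Linearize basic blocks into 'bytecode', patching forward branches.

    blocks: list of (label, instrs); an instr is either an opaque opcode
    string or a ("goto", target_label) branch whose destination offset
    may not be known yet when it is emitted."""
    code = []
    offsets = {}  # block label -> offset of its first instruction
    fixups = []   # (position of placeholder, target label) to patch later
    for label, instrs in blocks:
        offsets[label] = len(code)
        for op in instrs:
            if isinstance(op, tuple) and op[0] == "goto":
                code.append("goto")
                fixups.append((len(code), op[1]))
                code.append(None)  # destination offset not yet known
            else:
                code.append(op)
    for pos, target in fixups:     # second pass: all offsets now known
        code[pos] = offsets[target]
    return code

blocks = [("entry", [("goto", "exit"), "nop"]), ("exit", ["ret"])]
assert assemble(blocks) == ["goto", 3, "nop", "ret"]
```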

    Unlike bytecode that is fed into the VM from the outside, we don’t spend time doing validation of the specialized bytecode, since we can trust that it is valid – we’re generating it in-process! Additionally, the specialized bytecode may make use of “spesh ops” – a set of extra opcodes that exist purely for spesh to generate. Some of them are non-logging forms of ops that would normally log statistics (no point logging after we’ve done the optimizations), but most are for doing operations that – without the proofs and guards done by spesh – would be at risk of violating memory safety. For example, there’s an op that simply takes an object offset and reads a pointer or integer from a certain number of bytes into it, which spesh can prove is safe to do, but in general would not be.

    What I’ve described so far is the portable behavior that we can do on any platform. So it doesn’t matter whether you’re running MoarVM on x86, x64, ARM, or something else, you can take advantage of all the optimizations that spesh can do. On x64, however, we can go a step further, and compile the spesh graph not back into specialized MoarVM bytecode, but instead into machine code. This eliminates the interpreter overhead. In MoarVM, we tend to refer to this stage as “the JIT compiler”, because most people understand JIT compilation as resulting in machine code. In reality, what most other VMs call their JIT compiler spans the same space that both spesh and the MoarVM JIT cover between them. MoarVM’s design means that we can deliver performance wins on all platforms we can run on, and then an extra win on x64. For more on the machine code generation process, I can recommend watching this talk by brrt, who leads work on it.

    Argument guards

    By this point, we have some optimized code. It was generated for either a particular callsite (a certain specialization) or a combination of callsite and incoming argument types (an observed type specialization). Next, we need a mechanism that will, upon a call, look at the available specializations and see if any of them match up with the incoming arguments. Provided one is found that matches, we can then call it.

    My original approach to this was to simply have a list of specializations, each tagged with a callsite and, for each object argument index, an expected type, whether we wanted a type object or a concrete object, and – for container types like Scalar – what type we expected to find on the inside of the container. This was simple to implement, but rather inefficient. Even if all of the type specializations were for the same callsite, it would be compared for each of them. Alternatively, if there were 4 specializations and 3 were on the same callsite, and one was on a second callsite, we’d have to do 3 failed comparisons on it to reach the final one that we were hunting.

    That might not sound overly bad, because comparing callsites is just comparing pointers, and so somewhat cheap (although it’s branching, and branches aren’t so friendly for CPUs). Where it gets worse is that parameter type checks worked the same way. Therefore, if there were 4 specializations of the same callsite, all of them against a Scalar argument with 4 different types of value inside of it, then the Scalar would have to be dereferenced up to 4 times. This isn’t ideal.

    My work during the summer saw the introduction of a new, tree-structured, approach. Each node in the tree represents either an operation (load an argument to test, read a value from a Scalar container) with a single child node, or a test with two child nodes representing “yes” and “no”. The leaves of the tree either indicate which specialization to use, or “no result”.

    The tree structure allows for loads, tests, and dereferences to be lifted out. Therefore, each argument needs to be loaded once, checked against a particular type once, and dereferenced once if it’s a container. So, if there were to be specializations for (Scalar:D of Int:D, Str:D) and (Scalar:D of Int:D, Num:D), then the first argument would be loaded one time and tested to see if it is a Scalar. If it is, then it will be dereferenced once, and the resulting value tested to see if it’s an Int. Both alternatives for the second argument are placed in the tree underneath this chain of tests, meaning that they do not need to be repeated.

    Arg guard trees are dumped in the specializer log for debugging purposes. Here is how the output looks for the situation described above:

    0: CALLSITE 0x7f5aa3f1acc0 | Y: 1, N: 0
    1: LOAD ARG 0 | Y: 2
    2: STABLE CONC Scalar | Y: 3, N: 0
    3: DEREF_RW 0 | Y: 4, N: 0
    4: DEREF_VALUE 0 | Y: 5, N: 0
    5: STABLE CONC Int | Y: 6, N: 0
    6: LOAD ARG 1 | Y: 7
    7: STABLE CONC Int | Y: 8, N: 9
    8: RESULT 0
    9: STABLE CONC Str | Y: 10, N: 0
    10: RESULT 1

    As the output suggests, the argument guard tree is laid out in a single block of memory – an array of nodes. This gives good cache locality on the lookups, and – since argument guard trees are pretty small – means we can use a small integer type for the child node indices rather than requiring a pointer worth of space.
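    As a rough illustration, here is a tiny guard-tree interpreter in Python over a flat node array. The node layout and opcode names are simplified stand-ins for MoarVM's real structures (for example, the STABLE checks and DEREF_RW/DEREF_VALUE pair are collapsed into a plain type test and a single deref):

```python
class Scalar:
    """Toy container; the guard tree dereferences it to test the inner type."""
    def __init__(self, value):
        self.value = value

def evaluate(tree, args):
    """Walk a flat guard tree: each node is (op, payload, yes, no).
    Index 0 is reserved to mean 'no matching specialization'."""
    i, cur = 1, None
    while i != 0:
        op, payload, yes, no = tree[i]
        if op == "load":      cur = args[payload]; i = yes
        elif op == "type":    i = yes if type(cur).__name__ == payload else no
        elif op == "deref":   cur = cur.value; i = yes
        elif op == "result":  return payload
    return None

# Tree for two specializations sharing a first argument of Scalar-of-int:
# (Scalar of int, str) -> result 0, (Scalar of int, float) -> result 1.
# Each argument is loaded once, tested once, and dereferenced once.
tree = [
    None,                       # 0: no result
    ("load",  0, 2, 0),         # 1: load arg 0
    ("type",  "Scalar", 3, 0),  # 2: is it a Scalar container?
    ("deref", None, 4, 0),      # 3: read the value out (only once)
    ("type",  "int", 5, 0),     # 4: inner value must be an int
    ("load",  1, 6, 0),         # 5: load arg 1
    ("type",  "str", 7, 8),     # 6: str? -> specialization 0
    ("result", 0, 0, 0),        # 7
    ("type",  "float", 9, 0),   # 8: float? -> specialization 1
    ("result", 1, 0, 0),        # 9
]

assert evaluate(tree, [Scalar(42), "hi"]) == 0
assert evaluate(tree, [Scalar(42), 3.14]) == 1
assert evaluate(tree, [Scalar("x"), "hi"]) is None
```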

    Immutability wins performance

    Additional specializations are generated over time, but the argument guard tree is immutable. When a new specialization is generated, the existing argument guard tree is cloned, and the clone is modified to add the new result. That new tree is then installed in place of the previous one, and the previous one can be freed at the next safe point.

    Why do this? Because it means that no locks need to be acquired to use a guard tree. In fact, since spesh runs on a single thread of its own, no locks are needed to update the guard trees either, since the single specializer thread means those updates are naturally serialized.

    Calls between specialized code

    In the last part of the series, I mentioned that part of specializing a call is to see if we can map it directly to a specialization. This avoids having to evaluate the argument guard tree of the target of the call, which is a decidedly nice saving. As a result, most uses of the argument guard are on the boundary between unspecialized and specialized code.

    But how does the optimizer see if there’s a specialization of the target code that matches the argument types being passed? It does it by evaluating the argument guard tree – but on facts, not real values.

    On Stack Replacement

    Switching into specialized code at the point of a call handles many cases, but misses an important one: that where the hot code is entered once, then sits in a loop for a long time. This does happen in various real world programs, but it’s especially common in benchmarks. It’s highly desirable to specialize the hot loop’s code, if possible inlining things into the loop body and compiling the result into machine code.

    I discussed detection of hot loops in an earlier part of this series. This time around, let’s take a look at the code for the osrpoint op:

    OP(osrpoint):
        if (MVM_spesh_log_is_logging(tc))
            MVM_spesh_log_osr(tc);
        MVM_spesh_osr_poll_for_result(tc);
        goto NEXT;

    The first part is about writing a log entry each time around the loop, which is what bumps the loop up in the statistics and causes a specialization to be generated. The call to MVM_spesh_osr_poll_for_result is the part that checks if there is a specialization ready, and jumps into it if so.

    One way we could do this is to evaluate the argument guard in every call to MVM_spesh_osr_poll_for_result to see if there’s an appropriate optimization. That would get very pricey, however. We’d like the interpreter to make decent progress through the work until the optimized version of the code is ready. So what to do?

    Every frame gets an ID on entry. By tracking this together with the number of specializations available last time we checked, we can quickly short-circuit running the argument guard when we know it will give the very same result as the last time we evaluated it, because nothing changed.

    MVMStaticFrameSpesh *spesh = tc->cur_frame->static_info->body.spesh;
    MVMint32 num_cands = spesh->body.num_spesh_candidates;
    MVMint32 seq_nr = tc->cur_frame->sequence_nr;
    if (seq_nr != tc->osr_hunt_frame_nr || num_cands != tc->osr_hunt_num_spesh_candidates) {
        /* Check if there's a candidate available and install it if so. */
        ...

        /* Update state for avoiding checks in the common case. */
        tc->osr_hunt_frame_nr = seq_nr;
        tc->osr_hunt_num_spesh_candidates = num_cands;
    }

    If there is a candidate that matches, then we jump into it. But how? The specializer makes a table mapping the locations of osrpoint instructions in the unspecialized code to locations in the specialized code. If we produce machine code, a label is also generated to allow entry into the code at that point. So, mostly all OSR does is jump execution into the specialized code. Sometimes, things are approximately as easy as they sound.

    There’s a bonus feature hidden in all of this. Remember deoptimization, where we fall back to the interpreter to handle rarely occurring cases? This means we’ll encounter the osrpoint instructions in the unoptimized code again, and so – once the interpreter has done with the unusual case – we can enter back into the specialized, and possibly JIT-compiled code again. Effectively, spesh factors your slow paths out for you. And if you’re writing a module, it can do it differently based on different applications’ use cases of the module.

    Future idea: argument guard compilation to machine code

    At the moment, the argument guard tree is walked by a little interpreter. However, on platforms where we have the capability, we could compile it into machine code. This would perhaps allow branch predictors to do a bit of a better job, as well as eliminate the overhead the interpreter brings (which, given the ops are very cheap, is much more significant here than in the main MoarVM interpreter).

    That’s all, folks

    I hope this series has been interesting, and provided some insight into how the MoarVM specializer works. My primary reason for writing it was to put my recent work on the specializer, funded by The Perl Foundation, into context, and I hope this has been a good bit more interesting than just describing the changes in isolation.

    Of course, there’s no shortage of further opportunities for optimization work, and I will be reporting on more of that here in the future. I continue to be looking for funding to help make that happen, beyond what I can do in the time I have aside from my consulting work.

    Rakudo Star Release 2017.10

    Published on 2017-11-09T00:00:00

    gfldex: Racing Rakudo

    Published by gfldex on 2017-11-05T17:39:33

    In many racing sports telemetry plays a big role in getting faster.  Thanks to a torrent of commits by lizmat you can use telemetry now too!

    perl6 -e 'use Telemetry; snapper(½); my @a = (‚aaaa‘..‚zzzz‘).pick(1000); say @a.sort.[*-1 / 2];'
    Telemetry Report of Process #30304 (2017-11-05T17:24:38Z)
    No supervisor thread has been running
    Number of Snapshots: 31
    Initial Size:        93684 Kbytes
    Total Time:          14.74 seconds
    Total CPU Usage:     15.08 seconds
    wallclock  util%  max-rss  gw      gtc  tw      ttc  aw      atc
       500951  53.81     8424
       500557  51.92     9240
       548677  52.15    12376
       506068  52.51      196
       500380  51.94     8976
       506552  51.74     9240
       500517  52.45     9240
       500482  52.33     9504
       506813  51.67     6864
       502634  51.63
       500520  51.78     6072
       500539  52.13     7128
       503437  52.29     7920
       500419  52.45     8976
       500544  51.89     8712
       500550  49.92     6864
       602948  49.71     8712
       500548  50.33
       500545  49.92      320
       500518  49.92
       500530  49.92
       500529  49.91
       500507  49.92
       506886  50.07
       500510  49.93     1848
       500488  49.93
       500511  49.93
       508389  49.94
       508510  51.27      264
        27636  58.33
    --------- ------ -------- --- -------- --- -------- --- --------
     14738710  51.16   130876
    wallclock  Number of microseconds elapsed
        util%  Percentage of CPU utilization (0..100%)
      max-rss  Maximum resident set size (in Kbytes)
           gw  The number of general worker threads
          gtc  The number of tasks completed in general worker threads
           tw  The number of timer threads
          ttc  The number of tasks completed in timer threads
           aw  The number of affinity threads
          atc  The number of tasks completed in affinity threads

    The snapper function takes an interval at which data is collected. On termination of the program the table above is shown.

    The module comes with plenty of subs to collect the same data at hand and file your own report, which may be sensible in long-running processes. Or you can call the reporter sub by hand every now and then.

    use Telemetry;

    react {
        whenever Supply.interval(60) {
            say report;
        }
    }

    If the terminal won’t cut it, you can use HTTP to fetch telemetry data.

    Documentation isn’t finished, nor is the module. So stay tuned for more data.

    Main Development Branch Renamed from "nom" to "master"

    Published on 2017-10-27T00:00:00

    gfldex: There Is More Than One Way At The Same Time

    Published by gfldex on 2017-10-22T21:21:16

    The Perl 6 Rosettacode section for parallel calculations is terribly outdated and missing all the goodies that were added or fixed in the last few weeks. With this post I would like to propose an updated version for Rosettacode. If you believe that I missed something, feel free to comment below. Please keep in mind that Rosettacode is for showing off, not for being comprehensive.

    use v6.d.PREVIEW;


    Perl 6 provides parallel execution of code via threads. There are low-level constructs that start a thread or safely pause execution.

    my $t1 = Thread.start({ say [+] 1..10_000_000 });
    my $t2 = Thread.start({ say [*] 1..10_000 });

    my $l = Lock.new;
    $l.lock;
    $t1 = Thread.start: { $l.lock; say 'got the lock'; $l.unlock };
    sleep 2; $l.unlock;

    When processing lists, one can use a high-level Iterator created by the methods hyper and race. The latter may return values out of order. Those Iterators will distribute the elements of the list to worker threads that are in turn assigned to OS-level threads by Rakudo’s ThreadPoolScheduler. The whole construct will block until the last element is processed.

    my @values = 1..100;
    sub postfix:<!> (Int $n) { [*] 1..$n }
    say [+] @values.hyper.map( -> $i { print '.' if $i %% 100; $i!.chars });

    For for-lovers there are the race for and hyper for keywords for distributing work over threads in the same way as their respective method forms.

    race for 1..100 {
        say .Str; # There be out of order dragons!
    }

    my @a = do hyper for 1..100 {
        .Int! # Here be thread dragons!
    }
    say [+] @a;

    Perl 6 sports constructs that follow the reactive programming model. One can spin up many worker threads and use threadsafe Channels or Supplys to move values from one thread to another. A react-block can combine those streams of values, process them and react to conditions like cleaning up after a worker thread is done producing values or dealing with errors. The latter is done by bottling up Exception-objects into Failure-objects that keep track of where the error first occurred and where it was used instead of a proper value.

    my \pipe = Supplier::Preserving.new;

    start {
        for $*HOME {
            pipe.emit: .IO if .f & .ends-with('.txt');
            say „Looking in ⟨{.Str}⟩ for files that end in ".txt"“ if .IO.d;
            .IO.dir()».&?BLOCK when .IO.d;
            CATCH {
                default {
                    note .^name, ': ', .Str;
                }
            }
        }
        pipe.done;
    }

    react {
        whenever pipe.Supply {
            say „Checking ⟨{.Str}⟩ for "Rosetta".“;
            say „I found Rosetta in ⟨{.Str}⟩“ if try .open.slurp.contains('Rosetta');
            LAST {
                say ‚Done looking for files.‘;
                done;
            }
            CATCH {
                default {
                    note .^name, ': ', .Str;
                }
            }
        }
        whenever Promise.in(60 * 10) {
            say „I gave up to find Rosetta after 10 minutes.“;
            done;
        }
    }

    Many built-in objects will return a Supply or a Promise. The latter will return a single value or just convey an event like a timeout. In the example above we used a Promise in that fashion. Below we shell out to find and process its output line by line. This could be used in a react block if there are many different types of events to process. Here we just tap into a stream of values and process them one by one. Since we don’t have a react block to provide a blocking event loop, we wait for find to finish with await and process its exitcode. Anything inside the block given to .tap will run in its own thread.

    my $find = Proc::Async.new('find', $*HOME, '-iname', '*.txt');
    $find.stdout.lines.tap: {
        say „Looking for "Rosetta" in ⟨$_⟩“;
        say „Found "Rosetta" in ⟨$_⟩“ if try .open.slurp.contains('Rosetta');
    }

    await $find.start.then: {
        say „find finished with exitcode: “, .result.exitcode;
    }

    Having operators process values in parallel via threads or vector units is yet to be done. Both hyper operators and Junctions are candidates for autothreading. If you use them today, please keep in mind that side effects may provide foot guns in the future.

    gfldex: It’s Classes All The Way Down

    Published by gfldex on 2017-10-08T17:23:05

    While building a cache for a web api that spits out JSON I found myself walking over the same data twice to fix a lack of proper typing. The JSON knows only about strings even though most of the fields are integers and timestamps. I’m fixing the types after parsing the JSON with JSON::Fast by coercively .map-ing over the parsed stations.

    @stations.hyper.map: { # Here be heisendragons!
        .<lastchangetime> = .<lastchangetime>
            ?? DateTime.new(.<lastchangetime>.subst(' ', 'T') ~ 'Z', :formatter(&ISO8601))
            !! DateTime;
        .<clickcount> = .<clickcount>.Int;
        .<lastcheckok> = .<lastcheckok>.Int.Bool;
        (note "$_/$stations-count processed" if $_ %% 1000) with $++;
    }

    The hyper helps a lot to speed things up but will put a lot of stress on the CPU cache. There must be a better way to do that.

    Then lizmat showed where Rakudo shows its guts.

    m: grammar A { token a { }; rule a { } }
    OUTPUT: «5===SORRY!5=== Error while compiling <tmp>␤Package 'A' already has a regex 'a'
    (did you mean to declare a multi-method?)␤»

    Tokens are regexes, or maybe methods. But if tokens are methods, then grammars must be classes. And that allows us to subclass a grammar.

    grammar WWW::Radiobrowser::JSON is JSON {
        token TOP {\s* <top-array> \s* }
        rule top-array      { '[' ~ ']' <station-list> }
        rule station-list   { <station> * % ',' }
        rule station        { '{' ~ '}' <attribute-list> }
        rule attribute-list { <attribute> * % ',' }
        token year { \d+ } token month { \d ** 2 } token day { \d ** 2 } token hour { \d ** 2 } token minute { \d ** 2 } token second { \d ** 2}
        token date { <year> '-' <month> '-' <day> ' ' <hour> ':' <minute> ':' <second> }
        token bool { <value:true> || <value:false> }
        token empty-string { '""' }
        token number { <value:number> }
        proto token attribute { * }
        token attribute:sym<clickcount> { '"clickcount"' \s* ':' \s* '"' <number> '"' }
        token attribute:sym<lastchangetime> { '"lastchangetime"' \s* ':' \s* '"' <date> '"' }
        token attribute:sym<lastcheckok> { '"lastcheckok"' \s* ':' \s* '"' <bool> '"' }
    }

    Here we overload some tokens and forward calls to tokens that got a different name in the parent grammar. The action class follows suit.

    class WWW::Radiobrowser::JSON::Actions is JSON::Actions {
        method TOP($/) {
            make $<top-array>.made;
        }
        method top-array($/) {
            make $<station-list>.made.item;
        }
        method station-list($/) {
            make $<station>.hyper.map(*.made).flat; # Here be heisendragons!
        }
        method station($/) {
            make $<attribute-list>.made.hash.item;
        }
        method attribute-list($/) {
            make $<attribute>».made.flat;
        }
        method date($_) { .make: DateTime.new(.<year>.Int, .<month>.Int, .<day>.Int, .<hour>.Int, .<minute>.Int, .<second>.Num) }
        method bool($_) { .make: .<value>.made ?? Bool::True !! Bool::False }
        method empty-string($_) { .make: Str }
        method attribute:sym<clickcount>($/) { make 'clickcount' => $/<number>.Int; }
        method attribute:sym<lastchangetime>($/) { make 'lastchangetime' => $/<date>.made; }
        method attribute:sym<lastcheckok>($/) { make 'lastcheckok' => $/<bool>.made; }
    }

    In case you wonder how to call a method with such a funky name, use the quoting version of postfix:<.>.

    class C { method m:sym<s>{} }; C.new.'m:sym<s>'()

    I truncated the examples above. The full source can be found here. The .hyper version is still quite a bit faster but also heisenbuggy. In fact .hyper may not work at all when executed too fast after a program starts or when used in a recursive Routine. This is mostly due to the grammar being one of the oldest parts of Rakudo with the least amount of work to make it fast. That is a solvable problem. I’m looking forward to Grammar All The Things.

    If you got grammars please don’t hide them. Somebody might need them to be classy.


    samcv: Grant Final Report

    Published on 2017-09-25T07:00:00

    This contains the work since the last report as well as my final report.

    Table of Contents

    Work Since the Last Report

    Merged in Unicode Collation Algorithm

    I merged the Unicode Collation Algorithm branch into MoarVM. Now that the sort is stable, the coll, unicmp and .collate operators in Rakudo are no longer experimental, so use experimental :collation is no longer needed to use them.

    The $*COLLATION dynamic variable is still hidden under experimental, since it is possible design changes could be made to it.


    In some of my other posts I talked about the difficulties of getting Prepend codepoints working properly. To do this I had changed how we store synthetics so as not to assume that the first codepoint of a synthetic is always the base character. This month I merged in the change in synthetic representation and implemented the features which were now possible with the new representation.

    The feature was to detect which character is the base character and store its index in the synthetic. Most combiners, such as diacritics, come after the base character and are Extend codepoints: a + ◌́. Prepend codepoints have the reverse behaviour and come before the base: ؀◌ + 9 (Arabic number sign + number).

    This required many assumptions our code rightfully made before Unicode 9.0 to be abandoned. When a synthetic is created, we now check to see if the first codepoint is a Prepend codepoint. If so, we keep checking until we find a codepoint that is not a Prepend. In most cases, the base character is the codepoint following the last Prepend mark.

    In degenerate cases there is no base character, which could be a grapheme composed of all Prepend’s or only Prepend’s and Extend’s. In these degenerate cases we set the first codepoint as the base character.
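    The scan described above can be sketched in Python. The Prepend set here is a tiny illustrative subset; MoarVM consults the real Grapheme_Cluster_Break=Prepend property from its Unicode database.

```python
# Sketch of base-character detection in a synthetic grapheme.
# PREPEND is an illustrative subset of Grapheme_Cluster_Break=Prepend
# codepoints, not the full property data MoarVM uses.
PREPEND = {0x0600, 0x0601, 0x0602, 0x0603, 0x06DD, 0x070F}

def base_index(codepoints):
    """Return the index of the base character in a synthetic's
    codepoint array: the first codepoint after any leading Prepends.
    Degenerate case (all Prepends): fall back to index 0."""
    for i, cp in enumerate(codepoints):
        if cp not in PREPEND:
            return i
    return 0  # degenerate: grapheme is made up only of Prepends

# Arabic number sign (Prepend) + digit nine: base is the digit.
assert base_index([0x0600, 0x0039]) == 1
# No Prepend: base is the first codepoint, as assumed before Unicode 9.0.
assert base_index([0x0061, 0x0301]) == 0
```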

    Once I had that done, I was able to fix some of our ops which did not work correctly if there were Prepend characters. This included fixing our case changing op so it would now work on graphemes with Prepend marks. Since the case change ops apply the case change to the base codepoint, it is necessary for us to have the correct base character. Similarly, ordbaseat which gets the base codepoint also needed to be fixed. This allowed ignoremark to now work for graphemes with Prepend marks.


    I wrote documentation on our Unicode Collation Algorithm implementation, which explains to the reader what problems the UCA solves, with examples of different single-to-multiple and multiple-to-single mappings of codepoints. It goes into a fair amount of detail on how it was implemented.


    Bugs with Encoding into UTF8-C8

    Since MoarVM normalizes all input text, our way of dealing with text that must not be normalized is important to people who want their strings to pass through unchanged. Previously there was a bug where if something was a value storable in UTF-8, such as a Surrogate or a value higher than 0x10FFFF, it would create a Str type with that value, even though it was not valid Unicode. It would then throw when this value was used (since the Str type shouldn’t hold values higher than 0x10FFFF or Surrogates). As UTF8-C8 is the only way we have of dealing with text unaltered, this seemed like a serious issue that would prevent UTF8-C8 from being usable in a production environment attempting to encode arbitrary data. [f112fbcf]
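    The validity check involved is simple to state: a Str may only hold Unicode scalar values. A minimal sketch of the predicate (not MoarVM's actual C code):

```python
def is_unicode_scalar(cp: int) -> bool:
    """True if cp may appear in a normal decoded string: within the
    Unicode range and not a surrogate. Values that fail this check
    must be kept as UTF8-C8 synthetics instead of a plain Str."""
    return 0 <= cp <= 0x10FFFF and not (0xD800 <= cp <= 0xDFFF)

assert is_unicode_scalar(0x41)          # 'A'
assert not is_unicode_scalar(0xD800)    # surrogate
assert not is_unicode_scalar(0x110000)  # above the Unicode range
```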

    Bugs While Working with UTF8-C8 Strings

    Another issue I fixed was that under concatenation, text replacement or renormalization, the UTF8-C8 codepoints would be "flattened". They would lose their special properties and instead start acting like any other set of Unicode codepoints (although unusual since it contains a private use character and a hex code of the stored value). I changed our codepoint iterator so that optionally you can choose to pass back UTF8-C8 Synthetics unchanged. We use Synthetics to store both UTF8-C8 values as well as storing graphemes which contain multiple codepoints. When iterating by codepoint on an already existing arbitrary string, we want to retain the UTF8-C8 codepoints and make sure they are not changed during the renormalization process. This has been fixed, and UTF8-C8 strings are now drastically more reliable, and hopefully, much more production ready. [2f71945d]

    Grapheme Caching and Move To

    The function which moves a grapheme iterator forward a specified number of graphemes now works even if we aren’t starting from the very start of the string. This function has a first loop which locates the correct strand, and had a second loop which would find the correct grapheme inside that strand. I refactored the grapheme-locating code and was able to remove the second loop.

    In the grapheme caching implementation we can save a lot of work by not creating a new iterator for every grapheme access. On top of that, I sped up the move_to function by about 30%. While the cached iterator reduces calls to this function in the functions I added it to, there are still many places which may seek for each grapheme requested; this speeds those up.

    Other MoarVM Changes

    I set up automated Appveyor builds for MoarVM so we get automated builds on Windows (Travis CI only builds on macOS and Linux).

    I fixed a segfault that occurred when compiling nqp on Alpine Linux which uses musl as its libc. I ended up reducing the depth of recursion in the optimize_bb() function when compiling nqp from 99 to 29 (3x reduction). In the Rakudo build, we have a 5x reduction in the depth of recursion.
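    The trick is the standard one of turning the recursive call on an only child into a loop iteration, so only real branch points consume a stack frame. A sketch of the shape of the fix (hypothetical node type, not MoarVM's actual optimize_bb):

```python
import sys

class BB:
    """Hypothetical basic-block node with dominance children."""
    def __init__(self, children=None):
        self.children = children or []

def optimize_bb(bb, visit):
    """Walk the dominator tree. A chain of single-child nodes is
    followed iteratively instead of recursively, reducing the
    recursion depth the same way the MoarVM fix does."""
    while True:
        visit(bb)
        if len(bb.children) == 1:
            bb = bb.children[0]   # loop instead of recursing
            continue
        for child in bb.children:
            optimize_bb(child, visit)
        return

# A chain far deeper than the default recursion limit is fine now.
root = BB()
node = root
for _ in range(sys.getrecursionlimit() * 2):
    child = BB()
    node.children = [child]
    node = child
count = 0
def bump(_):
    global count
    count += 1
optimize_bb(root, bump)
assert count == sys.getrecursionlimit() * 2 + 1
```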

    Final Report

    As I’ve already said many things in my previous grant reports (1, 2, 3, 4), I will reiterate some of the big things I did; this list is not exhaustive. For full details of all the changes please see my other grant reports, as well as the partial list of bonus changes I made in MoarVM during the grant at the bottom of the page.

    The only thing not completed was implementing a whole new UCD backend. While I planned on doing this, I ended up choosing not to: I realized that it would not have been the best use of my time on the grant, as there were many more user-noticeable changes I could do. Despite this, I did solve the problems the rewrite was intended to solve, namely making property values distinct for each property and making UCD database generation reproducible. While the generated database is not byte-identical across runs, the only thing that changes is the internal property codes, which does not affect anything adversely. The generation is fully functional every time, instead of breaking our tests most of the time as it used to. Once the database was stable, I was able to update our database to Unicode 10. Without my improvements to reproducibility and per-property property values, updating to Unicode 10 would not have been possible. In addition, all Hangul (Korean) characters now have names in the Unicode database.

    A big thing I wanted to implement was the Unicode Collation Algorithm, which ended up being a total success. I was able to still retain the ability to choose which collation levels the user wanted to sort with as well as reverse the sort order of individual collation levels.
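    The level-selection and per-level reversal can be sketched with toy weight tables. The weights below are invented purely for illustration; the real implementation reads DUCET data and handles contractions and expansions.

```python
# Toy collation: per-codepoint (primary, secondary, tertiary) weights.
WEIGHTS = {
    'a': (1, 1, 1), 'A': (1, 1, 2),   # case differs at the tertiary level
    'á': (1, 2, 1),                   # accent differs at the secondary level
    'b': (2, 1, 1),
}

def collation_key(s, levels=(0, 1, 2), reverse=()):
    """Build a sort key using only the requested levels, negating a
    level's weights to reverse that level's ordering."""
    key = []
    for lvl in levels:
        sign = -1 if lvl in reverse else 1
        key.append(tuple(sign * WEIGHTS[ch][lvl] for ch in s))
    return tuple(key)

words = ['b', 'A', 'á', 'a']
# Full three-level sort: case loses to accent, 'b' sorts last.
assert sorted(words, key=collation_key) == ['a', 'A', 'á', 'b']
# Primary level only: 'a', 'A' and 'á' all tie (stable sort keeps order).
assert sorted(words, key=lambda s: collation_key(s, levels=(0,))) \
       == ['A', 'á', 'a', 'b']
```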

    Yet I did not only implement that one algorithm: I also implemented the Knuth-Morris-Pratt string search algorithm, which can take advantage of repeated characters in the needle (it can be multiple times faster if the needle has repeating sections). The Knuth-Morris-Pratt implementation was adjusted to use either the new cached grapheme iterator or simple lookup, depending on whether the haystack was flat or a strand. Indexing a strand-based string with a one-grapheme-long needle was sped up by 9x by adding a special case for it.

    Practically all string ops were sped up, often by multiple times, due to getting MVM_string_get_grapheme_at_nocheck inlined. In addition, I changed the way we access strings in many of our most used string ops, intelligently using grapheme iterators, cached grapheme iterators or direct access depending on circumstances. With MVM_string_get_grapheme_at_nocheck inlined, accessing graphemes through this function became between 1.5x faster for strands and up to 2x faster for flat strings. Ops we use a lot, like the one that backs eq and nqp::eqat, were given special casing for Strand ←→ Strand, Flat ←→ Flat and Strand ←→ Flat (which covers Flat ←→ Strand as well). This special casing sped up eq by 1.7x when one string is a strand and one is flat, and by 2.5x when both strings are flat. Applying similar optimizations to index gave a 2x speedup when the haystack is flat and a 1.5x speedup when the haystack is a strand (on top of the previous improvements from the Knuth-Morris-Pratt algorithm).

    I fixed a longstanding bug in NQP which caused combined ignoremark+ignorecase operations to be totally broken. I fixed this by adding more MoarVM ops and refactoring the code to have far fewer branches. In MoarVM we now use a centralized function for each variation of with/without ignorecase and ignoremark, which is also fully compatible with foldcase operations.

    Doing the ignoremark/ignorecase indexing work sped them up by multiple times, but then in addition to that, it became 2x faster when the haystack was made up of 10 strands by implementing a cached grapheme iterator.

    I implemented full Unicode 9.0 support not just in our grapheme segmentation, but also in our other ops, refactoring how we store synthetic codepoints to allow us to have the 'base' codepoint be a codepoint other than the 1st in the synthetic to support Prepend codepoints.

    Our concatenation was improved so as to make full renormalization of both input strings no longer needed in almost all cases. The x repeat operator was fixed so it always creates normalized strings. Previously it could create unnormalized strings instead, causing issues when it was used.

    I believe I have more than accomplished what I set out to do in this grant. I made tons of user-facing changes: speedups, Unicode normalization support, full Unicode 9.0 support. I added awesome collation features and fixed all the major issues with decoding and working with UTF8-C8 representations. Below the deliverables which were part of this project, I have listed an incomplete list of bonus deliverables.


    • I documented MoarVM’s string representation, with lots of good information for future developers as well as interested users.

    • Hangul syllables now have Unicode names in our database, with a test added in roast.

    • I implemented the Unicode Collation Algorithm [866623d9]

      • Tests have been added in roast for the UCA and the unicmp op

      • I wrote documentation on our Unicode Collation Algorithm implementation

      • Regarding language specific sort. This would involve us using data from the Unicode CLDR. Once we have this data available from MoarVM, it simply requires a conditional to override DUCET and check a different set of data before checking the DUCET data. This information is in our documentation for collation.

    • Text normalization

      • Speed of normalization was improved

      • Full Unicode 9.0 support for text segmentation and normalization was added

    • While I did not fully rewrite the database, I did solve the needed issues:

      • Property values are now unique for each property

      • Running the generation script creates a functional database every time it is run, rather than only some of the time.

      • I added Unicode Collation data to our database, generated from a Perl 6 script, which happened to be the only property required to complete my deliverables

    Bonus Deliverables

    Here is a somewhat complete list of bonus deliverables:

    • Updated our database to Unicode 10. This was only possible once I had fixed the problems with the database generation, and made property values unique.

    • Implemented Knuth-Morris-Pratt string search

    • Set up Appveyor builds. Appveyor builds and tests MoarVM on Windows, similar to Travis CI.

    • Fixed ignoremark+ignorecase regex when used together as well as huge speed increases.


    • Fix UTF8-C8 encoding so it can encode values > 0x10FFFF as well as surrogates

    • Fix UTF8-C8 strings so they do not get corrupted and flattened when string operations are performed on them.

    • MVM_string_utf8_decodestream: free the buffer on malformed UTF-8 [a22f98db]

    String Ops

    • Have MVM_string_codes iterate the string with codepoint iterator [ed84a632]

    • Make eqat 1.7x-2.5x faster [3e823659]

    • Speed up index 9x when Haystack is strand, needle is 1 long [0b364cb8]

    • Implement the Knuth-Morris-Pratt string search algorithm [6915d80e]

    • Add indexim_s op and improve/fix bugs in eqatim_s [127fa2ce]

    • Fix a bug in index/eqat(im) and in ord_getbasechar causing us to not decompose the base character when the grapheme was a synthetic [712cff33]

    • Fix MVM_string_compare to support deterministic comparing of synthetics [abc38137]

    • Added a codes op which gets the number of codepoints in a string rather than the number of graphemes. Rakudo is now multiple times faster doing .codes. Before, it would request an array of all the codepoints and then get the number of elements, which was much slower.

    Fix string ops with Prepend characters

    • Rework MVMNFGSynthetic to not store base separately [3bd371f1]

    • Fix case change when base cp isn’t the first cp in synthetic [49b90b99]

    • For degenerate Synth’s with Prepend and Extend set base cp to 1st cp [db3102c4]

    • Fix ignoremark with Prepend characters and ordbaseat op [f8a639e2]

    Memory/GC/Build Fixes

    • Fix segfault when compiling nqp with musl as libc [5528083d]

    • Avoid recursion in optimize_bb() when only 1 child node [6d844e00]

    • Fix various memory access/garbage collection issues in some string ops that were showing up when running in Valgrind or using Address Sanitizer

    Grapheme Iteration

    • Ensure we can move forward in a grapheme iterator even if we aren’t starting at the very beginning of a string.

    • Use grapheme iterator cached for ignorecase/ignoremark index ops [b2770e27]

    • Optimize MVM_string_gi_move_to. Optimize the loop which finds the correct location within a strand so that it isn’t a loop and is just conditionals. [c2fc14dd]

    • Use MVMGraphemeIter_cached for strands in KMP index [ce76c994]

    • Allow MVM_string_get_grapheme_at_nocheck to be inlined

    • Refactor code into iterate_gi_into_string() to reduce code duplication [1e92fc96]

    Tests Added to Roast

    • Add tests for testing collation. Tests for the unicmp operator [5568a0d63]

    • Test that Hangul syllables return the correct Unicode name [6a4bc5450]

    • Add tests for case changes when we have Prepend codepoints [00554ccbd]

    • Add tests for x operator to ensure normalization retained [1e4fd2148] [59909ca9a]

    • Add a large number of string comparison tests [51c4ac08b]

      • Add tests to make sure synthetics compare properly with cmp [649d9dc50]

    • Improve ignoremark tests to cover many different cases [810e218c8]

      • Add ignoremark tests to cover JSON::Tiny regression + other issue [c185acc57]

    • Add generated tests (from UCD data) and manually created ones to ensure strings concatenation is stable, when the concatenated string would change the normalization. [2e1f7d92a][9f304b070] [64e385faf][0976d07b9][59909ca9a][2e1f7d92a] [88635564e] [a6bbc73cf] [a363e3ff1]

    • Add test for MoarVM Issue #566 .uniprop overflow [9227bc3d8]

    • Add tests to cover RT #128875 [0a80d0d2e]

    • Make new version of GraphemeBreakTest.t to better test grapheme segmentation [54c96043c]

    NQP Work

    Below is a listing of some of the commits I made to NQP. This included adding the ops I created over the course of the grant: eqatim, indexim, indexicim, eqaticim, and codes (gets the number of codepoints in a string rather than graphemes).

    The testing in NQP was inadequate for our string ops, so I added hundreds of tests for practically all of the string ops, so we could properly test the different variations of index* and eqat*.

    NQP Documentation

    • Add docs for a few variations of index/eqat [589a3dd5c]

    • Bring the unicmp_s op docs up to date [091625799]

    • Document hasuniprop moar op [650840d74]

    NQP Tests

    • Add more index* tests to test empty string paths [8742805cb]

    • run indexicim through all of the indexic tests [26adef6be]

    • Add tests for RT #128875, ignorecase+ignoremark false positives [b96a0afe7]

    • Add tests for nqp::eqatim op on MoarVM [6fac0036e]

    Other Work

    • Added script to find undocumented NQP ops [0ead16794]

    • Added nqp::codes to QASTOperationsMAST.nqp [59421ffe1]

    • Update QASTRegexCompilerMAST to use new indexicim and eqaticim ops [18e40936a]

    • Added eqatim and indexim ops. Fix a bug when using ignoremark [9b94aae27]

    • Added nqp::hasuniprop op to QASTOperationsMAST.nqp [d03c4023a]

    p6steve: Clone Wars

    Published by p6steve on 2017-09-20T18:52:04

    Apologies to those who have OO steeped in their blood. I am a wary traveller in OO space; maybe I am a technician, not an architect, at heart. So for me, no sweeping frameworks unless and until they are needed. And, frankly, one can go a long way on procedural code with subroutines to gather repetitive code sequences.

    (And don’t get me started on functional programming…)

    Some time ago, I was tasked to write a LIMS in old perl. Physical ‘aliquots’ with injections of various liquids would be combined and recombined by bio-machines. This led to a dawning realization that these aliquot objects could be modelled in an object style with parent / child relationships. After a few weeks, I proudly delivered my lowly attempt at ‘Mu’ for this (and only this) problem. Kudos to the P6 team – after a couple of weeks in here, it’s just sensational the level of OO power that the real Mu delivers:

    Screenshot 2017-09-20 19.32.21

    Now, hard at work, on the perl6 version of Physics::Unit, I am wondering how to put the OO theory into productive practice. One of my aims was to find a medium sized (not core tools) problem that (i) I know something about and (ii) would be a good-sized problem to wrangle.

    So I am enjoying the chance to design some classes and some interfaces that will make this all hang together. But – as an explorer, it has become clear that I only have three options. The problem runs like this:

    Initially I had some success with object types ::T – but these only let you read the type and duplicate it if needed for a new left-hand-side container. Then I tried the built-in (shallow) clone method. But…

    Ultimately, I worked out that $x.perl.EVAL with some ~~ s/// substitutions on the way would do the trick!
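    For readers more at home elsewhere, the round-trip idea (serialize to source text, tweak it, re-evaluate) can be sketched in Python, with repr and ast.literal_eval standing in for .perl and EVAL. This is an analogue of the technique for plain literal data, not the Physics::Unit code itself:

```python
import ast

original = {'value': 42, 'units': ['m', 's']}

# Serialize to source text (like .perl), optionally tweak it
# s///-style, then evaluate it back into a brand-new object graph
# (like EVAL).
source = repr(original)
clone = ast.literal_eval(source.replace("'m'", "'km'"))

assert clone == {'value': 42, 'units': ['km', 's']}
assert clone is not original                     # a separate object
assert clone['units'] is not original['units']   # deep, not shallow
```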

    Phew. Please comment below if you have a better way to share – or would like to point out the risks of this technique…

    p6steve: perl6 Atomic Fission

    Published by p6steve on 2017-09-03T18:46:48

    I have been listening to the reaction on the web to the incorporation of an emoji as a unicode symbol in perl6 rakudo. Here’s a flavour…


    The rationale for the use of unicode symbols is as follows:

    BTW- ASCII versions are known as Texas versions since they are always bigger

    Certainly this has caused some consternation – ranging from how can I type ⚛️ on my keyboard (hit CTRL-CMD-SPACE if you are on macOS ) to this will never be accepted for the coding standards of my company.

    On reflection, while it is understandable that programmers have a well established comfort zone of ASCII text and using English for keywords, I think that perl6 is leading the way on an irresistible path. Of the 6.5bn people on the planet, only a small fraction prefer to work in English – or even in Latin alphabets. Now, the pioneering work to embed unicode in a programming language will open the doors to all kinds of invention. What about:

    And this, in combination with perl6 Grammars, opens some interesting conceptual doors.


    samcv: Grant Status Update 4

    Published by Samantha McVey on 2017-08-31T07:00:00

    This is my fourth grant update. A productive and noteworthy month. I gave two presentations at YAPC-EU in the Netherlands earlier this month, on High End Unicode and MoarVM Internals (links to my slides). It was my first Perl conference and I had a great time meeting people in the Perl 6 and Perl community!

    Despite the conference, I made some big changes this month: big speedups for indexing operations, as well as an update to the Unicode 10 database. Before I could regenerate the database I had to fix the script that generates it to be more deterministic and to have unique property values per property (without this, regenerating it with the Unicode 10 data would have resulted in partially broken Unicode properties). I also implemented the Knuth-Morris-Pratt string search algorithm for our index function.

    I have added documentation to MoarVM which is an overview of how our strings are represented as well as details about normalization, synthetics and other topics. On Hacker News someone noted that this was not documented anywhere, so I made sure to add documentation for this. If you are interested in some of the internals of MoarVM, I’m hoping the document should be pretty accessible even if you are not a MoarVM developer.

    Table of Contents


    Knuth-Morris-Pratt String Search Algorithm

    Previously we did not have any natively implemented efficient string search algorithms; we only had memmem, which was optimized but only worked if both strings were flat and had the same bitwidth per grapheme.

    Now all index operations with a needle of 4096 graphemes or fewer are optimized and no longer use brute-force searching (this does not include case-insensitive or ignoremark indexing operations).

    My KMP implementation in MVM_string_index is used when:

    • the two strings have non-matching types

    • the needle has more than one codepoint

    • the needle is no more than 4096 graphemes long

    The speedup can be small or large depending on the pattern of the needle:

    • repeating letters in the needle can cause a multiple-times speed increase, depending on the haystack

    We still use memmem when both strings are flat and have the same data type (which uses Knuth-Morris-Pratt or Boyer-Moore depending on platform). However, most of the strings we work with (especially once the strings have been modified) are strands.
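    For reference, the core of Knuth-Morris-Pratt looks like this. This is a generic Python sketch over strings, not the grapheme-level C code in MoarVM:

```python
def kmp_index(haystack, needle, start=0):
    """Return the index of needle in haystack at or after start,
    or -1. The failure table lets the search resume mid-needle
    after a mismatch, which is why repeated letters in the needle
    pay off: the haystack is never re-scanned."""
    if not needle:
        return start
    # fail[i] = length of the longest proper prefix of needle[:i+1]
    # that is also a suffix of it.
    fail = [0] * len(needle)
    k = 0
    for i in range(1, len(needle)):
        while k and needle[i] != needle[k]:
            k = fail[k - 1]
        if needle[i] == needle[k]:
            k += 1
        fail[i] = k
    # Scan the haystack, never moving backwards through it.
    k = 0
    for i in range(start, len(haystack)):
        while k and haystack[i] != needle[k]:
            k = fail[k - 1]
        if haystack[i] == needle[k]:
            k += 1
        if k == len(needle):
            return i - k + 1
    return -1

assert kmp_index('abcabcabd', 'abcabd') == 3
assert kmp_index('banana', 'nan') == 2
assert kmp_index('banana', 'x') == -1
```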

    Grapheme Caching

    Since the Knuth-Morris-Pratt algorithm will often request the same grapheme again, but will never request an earlier point in the haystack, I was able to optimize the KMP string search function I added to cache graphemes, so we can use a grapheme iterator instead of MVM_string_get_grapheme_at_nocheck, which, for strands, has to find its place in the haystack from the beginning each time. What the grapheme caching does is cache the last returned grapheme: if the same grapheme is requested again, it keeps returning it without requesting a new grapheme from the grapheme iterator. If the next grapheme, or some number of graphemes past the current position, is requested, it either grabs the next grapheme or moves the grapheme iterator forward (skipping a certain number of graphemes) and then gets a grapheme from the iterator.
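    The caching wrapper behaves roughly like this (a Python sketch over a plain iterator; the names are illustrative, not MoarVM's):

```python
class CachingGraphemeReader:
    """Wraps a forward-only grapheme iterator for a search loop that
    only ever asks for the current position again or a later one.
    Repeating a request hits the cache; a later request skips the
    iterator forward. Asking for an earlier position is an error."""
    def __init__(self, graphemes):
        self._it = iter(graphemes)
        self._pos = -1
        self._cached = None

    def at(self, index):
        if index < self._pos:
            raise ValueError('KMP never seeks backwards in the haystack')
        while self._pos < index:          # skip forward as needed
            self._cached = next(self._it)
            self._pos += 1
        return self._cached               # cache hit when index == pos

reader = CachingGraphemeReader('abc')
assert reader.at(0) == 'a'
assert reader.at(0) == 'a'   # repeated request: no iterator work
assert reader.at(2) == 'c'   # skipped past 'b' without re-scanning
```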

    Here are some timings I got before and after the grapheme iterator for an English language book text file, searching for English words misspelled by one letter (so it would search the entire file but be representative of searching for actual words).




    [Table: before/after timings for index with a word needle, against a strand haystack and against a flat haystack]



    As you can see from the table, we actually even got savings when the haystack was flat. This surprised me, since getting a grapheme with a flat haystack points to a function with one switch and then returning the integer of the blob array at the specified position. I am guessing this is likely caused by the caching function generating more efficient machine code, since the savings can’t be only explained by the savings from caching the grapheme — the speed up was seen even when I manipulated the needle so that there were no cache “hits” and it always requested different graphemes.

    The grapheme caching has not yet been merged, but is ready to go after fixing some merge conflicts.

    Inlining MVM_string_get_grapheme_at_nocheck

    After I finished writing the previous section, I was able to discover the reason for the speed difference with flat haystacks. By getting MVM_string_get_grapheme_at_nocheck to inline I was able to speed up index operations for a flat haystack by 2x. This is on top of the speedups we got from the KMP algorithm! This should affect any code which uses this function, making it 2x as fast for flat strings, with likely a slight speedup for strands as well. This has huge implications, as this function is used extensively throughout our string ops. This may change what I do with the grapheme caching code: it is likely I will change it so that it uses grapheme caching for strands, and MVM_string_get_grapheme_at_nocheck for flat haystacks.

    Single Grapheme Needle Index

    I sped up string index by 9x when the haystack is a strand and the needle is 1 grapheme long. For this we use a grapheme iterator when looking for a single grapheme inside of a strand, and use a simpler faster loop since we are only looking for a single grapheme.


    Property Values

    Property values are now unique for each property in the Unicode database. Property value names are not unique across properties, so we must store them per property. Previously a clash would cause only the property whose value was last in the C data array to work. Now that property values are unique for each property code, that should no longer cause breakage.
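    The collision is easy to picture: many properties share value names (think of how many binary properties have a "Yes"). A sketch of keying the lookup by (property, value) instead of by value alone; the data here is illustrative, not the real generation script:

```python
# The same value name appears under several properties, so a single
# name -> code table can only keep whichever property wrote it last.
raw = {
    'Alphabetic': ['No', 'Yes'],
    'White_Space': ['No', 'Yes'],
    'Grapheme_Cluster_Break': ['Other', 'Prepend', 'Extend'],
}

# Unique per property: key by (property, value name).
codes = {}
for prop, values in raw.items():
    for i, name in enumerate(values):
        codes[(prop, name)] = i

assert codes[('Alphabetic', 'Yes')] == 1
assert codes[('White_Space', 'Yes')] == 1      # no longer clobbered
assert codes[('Grapheme_Cluster_Break', 'Prepend')] == 1
```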

    Unicode Database Updated to Unicode 10

    The Unicode database has been updated for Unicode 10. Now that the previous breaking point — the database not always generating properly each time — was solved, I was able to make changes to the Unicode database, including updating it to version 10. The main significant changes in this release were the addition of new scripts, symbols and Emoji. You can see the full list of changes here.

    Unicode Names

    Hangul (Korean) characters now have names in our name database. These are generated algorithmically at database-creation time by decomposing the characters and concatenating the Jamo names of the 2 or 3 resulting codepoints. This is needed since the Unicode data file does not include the names for these codepoints and leaves it to the implementer to create them. A roast test has been added to check for support of these Korean characters.
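    This is the standard algorithm from the Unicode core specification; a Python sketch of it (the real generation happens in the database-generation scripts, not in Python):

```python
# Hangul syllable names: arithmetically decompose the syllable into
# its 2 or 3 Jamo, then concatenate their short names.
L_PART = ['G', 'GG', 'N', 'D', 'DD', 'R', 'M', 'B', 'BB', 'S', 'SS',
          '', 'J', 'JJ', 'C', 'K', 'T', 'P', 'H']
V_PART = ['A', 'AE', 'YA', 'YAE', 'EO', 'E', 'YEO', 'YE', 'O', 'WA',
          'WAE', 'OE', 'YO', 'U', 'WEO', 'WE', 'WI', 'YU', 'EU', 'YI', 'I']
T_PART = ['', 'G', 'GG', 'GS', 'N', 'NJ', 'NH', 'D', 'L', 'LG', 'LM',
          'LB', 'LS', 'LT', 'LP', 'LH', 'M', 'B', 'BS', 'S', 'SS', 'NG',
          'J', 'C', 'K', 'T', 'P', 'H']

def hangul_name(cp):
    """Name of a precomposed Hangul syllable (U+AC00..U+D7A3)."""
    s = cp - 0xAC00
    if not 0 <= s < 19 * 21 * 28:
        raise ValueError('not a precomposed Hangul syllable')
    l, v, t = s // (21 * 28), (s % (21 * 28)) // 28, s % 28
    return 'HANGUL SYLLABLE ' + L_PART[l] + V_PART[v] + T_PART[t]

assert hangul_name(0xAC00) == 'HANGUL SYLLABLE GA'
assert hangul_name(0xD55C) == 'HANGUL SYLLABLE HAN'   # 한
```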

    Fixed a bug in ignoremark/ordbaseat

    Ignoremark relies on the MoarVM ordbaseat op under the hood. The op assumed that once we had gotten a synthetic’s base character, we already had the final base character; only for non-synthetics would we decompose the character to get the resulting base character. This wasn’t true, for example, with "\c[LATIN SMALL LETTER J WITH CARON, COMBINING DOT BELOW]": the "j with caron" would not be decomposed, because it was the base character of a synthetic.

    This also fixes a bug in indexim and eqatim ops which caused it to fail when encountering a synthetic.

    We now have ord_getbasechar handle both synthetics and non-synthetics, and have MVM_string_ord_basechar_at just handle the needed string-based checks, grabbing the grapheme from the string and passing its value on to ord_getbasechar.

    Synthetic Grapheme Rework

    I reworked MVMNFGSynthetic, the C struct which represents our synthetic graphemes. This is done to (eventually) get Prepend support working for things like ignoremark regex and the ordbaseat op which gets the base codepoint of a grapheme.

    Unlike all other marks which come after a base character, Prepend characters come before the base character. All of our current code assumed the first codepoint of the synthetic is the base character.

    For now, we also assume the first codepoint of the synthetic is the base character, but we now have a base_index which will be able to hold the index of the base character in the codepoint array.

    While this doesn’t add Prepend support to everything, this is one step toward getting that working, and decoupling base codepoints from being the first codepoint of a synthetic.

    This is not yet merged, but is ready to go.


    There’s not as much to say about this, since it is almost ready. As I said last month it is fully functional, and I have since done some work cleaning it up. Left to be done is integrating the data generation into the other Unicode database generation script. Although the file and code it uses to generate is different, ideally we will have only one script to run to generate and update all of our files on new Unicode releases.

    Previously I created a script which downloads all of the Unicode files needed in the generation, so an update should hopefully only require a run of the download script to fetch the UCD, the UCA (Unicode Collation Algorithm), and Emoji data.

    One thing I had not talked about previously is some of the ways I sped up the UCA implementation to be quite fast and efficient, handling the collation arrays efficiently even for very large strings. If you are interested, the details are in my slides.

    NQP Documentation

    In addition to the MoarVM string documentation I mentioned in the introduction, I also ensured all of the ops I have added over my project are documented in NQP.

    Ensured all the NQP index, eqat variations (indexic, indexim, indexicim, eqatic, eqaticim) are documented, and added the couple that had not yet been added to the ops list.

    Added a Perl 6 program which gets a list of ops which are not mentioned in the oplist which will hopefully be useful to other developers.

    The Goal is in Sight

    The grant is finally winding down. I have all significant things implemented although not everything has been merged yet. I also have implemented additional things that were not part of the grant (Knuth-Morris-Pratt string search algorithm).

    Left to be Done

    To finally complete my work and fulfill my objectives, I will write any additional documentation or tests that need to be made. If other Perl 6 devs or users in the community want to make any requests for Unicode or string related documentation, you can send me an email or send me a message on freenode IRC (nick samcv).

    Other than this, I only need to clean up and merge the collation arrays branch, merge the synthetic grapheme rework, and update the grapheme caching branch for KMP to use caching for strand haystacks and the now-inlined MVM_string_get_grapheme_at_nocheck for flat haystacks.

    stmuk: Swiss Perl Workshop 2017

    Published by stmuk on 2017-08-30T17:48:17


    After a perilous drive up a steep, narrow, winding road from Lake Geneva we arrived at an attractive Alpine village (Villars-sur-Ollon) to meet with fellow Perl Mongers in a small restaurant.  There followed much talk and a little clandestine drinking of exotic spirits including Swiss whisky. The following morning walking to the conference venue there was an amazing view of mountain ranges. On arrival I failed to operate the Nespresso machine which I later found was due to it simply being off.  Clearly software engineers should never try to use hardware. At least after an evening of drinking.

    Wendy’s stall was piled high with swag including new Bailador (Perl 6 dancer like framework) stickers, a Shadowcat booklet about Perl 6 and the new O’Reilly “Thinking in Perl 6″. Unfortunately she had sold out of Moritz’s book “Perl 6 Fundamentals” (although there was a sample display copy present). Thankfully later that morning I discovered I had a £3 credit on Google Play Books so I bought the ebook on my phone.

    The conference started early with Damian Conway’s Three Little Words.  These were “has”, “class” and “method” from Perl 6 which he liked so much that he had added them to Perl 5 with his “Dios” – “Declarative Inside-Out Syntax” module.  PPI wasn’t fast enough so he had to replace it with a 50,000 character regex PPR. Practical everyday modules mentioned included Regexp::Optimizer and Test::Expr. If the video  doesn’t appear shortly on youtube a version of his talk dating from a few weeks earlier is available at

    Jonathan Worthington returned with his Perl 6 talk on “How does deoptimization help us go faster?” giving us insight into why Perl 6 was slow at the Virtual Machine level (specifically MoarVM). Even apparently simple and fast operations like indexing an array were slow due to powerful abstractions, late binding and many levels of Multiple Dispatch. In short the flexibility and power of such an extensible language also led to slowness due to the complexity of code paths. The AST optimizer helped with this at compile time but itself took time and it could be better to do this at a later compile time (like Just In Time).  Even with a simple program reading lines from a file it was very hard to determine statically what types were used (even with type annotations) and whether it was worth optimizing (since the file could be very short).

    The solution to these dynamic problems was also dynamic, but seeing what was happening required cheap logging of execution, which was passed to another thread. This logging is made visible by setting the environment variable MVM_SPESH_LOG to a filename. Better tooling for this log would be a good project for someone.
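    For anyone wanting to try this, the invocation looks like the following (the script and log file names here are illustrative):

    ```shell
    # Ask MoarVM's specializer to dump its log to a file,
    # then inspect it.
    MVM_SPESH_LOG=spesh.log perl6 my-script.p6
    less spesh.log
    ```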

    For execution planning we look for hot (frequently called) code and long blocks of bytecode (slow to run), and consider how many types are used (avoiding “megamorphic” cases with many types, which need many versions of the code). There is also analysis of the code flow between different code blocks using SSA (static single assignment) form. Mixins made the optimization particularly problematic.

    MoarVM’s spesh did statistical analysis of the code in order to rewrite it in faster, simpler ways. Guards (cheap checks for things like types) were placed to catch cases where it got things wrong, and if these were triggered (infrequently) it would deoptimize as well; hence the counterintuitive title, since “deoptimization enables speculation”. The slides and video are available online. The older and more dull-witted of us (including myself) might find the latter part of the video more comprehensible at 0.75× YouTube speed.

    After a superb multi-course lunch (the food was probably the best I’d had at any Perl event) we returned promptly to hear Damian talk on “Everyday Perl 6”. He pointed out that it wasn’t necessary to code-golf obfuscated extremes of Perl 6, and that the average Perl 5 programmer would find many things simpler in Perl 6. Also, a rewrite from 5 to 6 might see something like 25% fewer lines of code, since 6 is more expressive (as well as more consistent) in its syntax, although performance problems remain (with solutions in progress, as the previous talk had reminded us).

    Next Liz talked of a “gross” (in the numerical sense of 12 x 12 rather than the American teen sense) of Perl 6 Weeklies as she took us down memory lane to 2014 (just about when MoarVM was launched, and when Unicode support was poor!) with some selected highlights and memories of Perl 6 developers of the past (and hopefully of the future again!). Her talk was recorded and is available online.


    Cal then spoke of Perl 6 maths, which he thought was good with its Rats and FatRats but not quite good enough, and of his ideas for fixing it. On the following day he showed us he had started some TDD work on TrimRats. He also told us that Newton’s Method wasn’t very good but generated a pretty fractal.

    Lee spoke about how to detect Perl 5 memory leaks with various CPAN modules; his examples are available online.

    The day finished with Lightning Talks and a barbecue at givengain — a main sponsor.

    On the second day I noticed the robotic St Bernards dog in a tourist shop window had come to life.


    Damian kicked off the talks with my favourite of his talks, “Standing on the Shoulders of Giants”, starting with the Countess of Lovelace and her Bernoulli number program. This generated a strange sequence with many zeros. The Perl 6 version got the zeros right since it used rational numbers rather than floating point, whereas the Perl 5 version initially suffered from floating-point rounding errors (which are fixable).
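    The rationals point is easy to demonstrate. This is standard Perl 6 behaviour, not code from the talk:

    ```perl6
    # Decimal literals in Perl 6 are Rats (rational numbers), so the
    # classic floating-point surprise simply does not occur:
    say 0.1 + 0.2 == 0.3;    # True (False in Perl 5 and most languages)
    say (0.1 + 0.2).^name;   # Rat
    ```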

    Among other things he showed us how to define a new infix operator in Perl 6. He also showed us a Perl 6 sort program that looked exactly like LISP, even down to the Lots of Irritating Superfluous Parentheses. I think this was quicksort (he certainly showed us a picture of Sir Tony Hoare at some point), along with a very functional (Haskell-like) equivalent making heavy use of Perl 6 multiple dispatch. Also included was a demonstration of Perl 6 “before” as a sort of typeless/multi-type comparison infix. Damian then returned to his old favourite of Quantum Computing.
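    Defining a custom infix operator in Perl 6 takes only a line. This is a minimal sketch in the spirit of the talk, not Damian’s actual example:

    ```perl6
    # A user-defined infix operator is just a specially named sub:
    sub infix:<avg> (Numeric $a, Numeric $b) { ($a + $b) / 2 }

    say 4 avg 10;         # 7
    say 1 avg 2 avg 3;    # 2.25 (left-associative: (1 avg 2) avg 3)
    ```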

    My mind and notes got a bit jumbled at this point, but I particularly liked the slide that explained how factorisation could work by observing the product of possible inputs, since this led to a collapse that revealed the factors. To do this on RSA etc., of course, needs real hardware support, which probably only the NSA and friends have (?). Damian’s code examples, and an earlier version of his talk, are available online. Around this point there was a road race of classic cars going on outside, up the main road into the village, and the car noises in the background were strangely more relaxing than annoying.


    After Quantum Chaos Paul Johnson brought us all back down to ground with an excellent practical talk on modernising legacy Perl 5 applications based on his war stories. Hell, of course, is “Other People’s Code”, often dating from Perl’s early days and lacking documentation and sound engineering.

    Often the original developers had long since departed or, in the worst cases, were still there. Adding tests and logging (with stack traces) was particularly useful, as was moving to git (although its steep learning curve meant mentoring was needed) and handling CPAN module versioning with Pinto. Many talks had spoken of the Perl 6 future, whereas this one spoke of the Perl 5 past and present and the work many of us suffer through to pay the bills. The talk is available online.


    Jonathan then spoke of reactive distributed software. A distributed system is an async one where “Is it working?” means “some of it is working but we don’t know which bits”. Good OO design is “tell, don’t ask”: you tell a remote service to do something for you rather than parsing the response and doing the work yourself, which breaks encapsulation. This is particularly important in building well-designed distributed systems, since otherwise the systems are less responsive and reliable. Reactive (async) works better for distributed software than interactive (blocking or sync).

    We saw a table that, for reactive (async) code, used a Perl 6 Promise for one value and a Supply for many values, and, for interactive code, a plain value and a Perl 6 Seq. A Supply could be used for pub/sub and the Observer Pattern. A Supply can either be live (like broadcast TV) or, as most Perl 6 supplies are, on-demand (like Netflix). Then samples of networking (socket) based code were discussed, including a web client, a web server and SSH::LibSSH (async client bindings, often very useful in practical applications like port forwarding).
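    As a minimal illustration of the async side of that table (my own sketch, not code from the talk), an on-demand Supply runs its block afresh for each subscriber:

    ```perl6
    # An on-demand Supply: the supply block runs for each tap.
    my $numbers = supply {
        emit $_ for 1..3;
    };

    $numbers.tap(-> $v { say "got $v" });
    # got 1
    # got 2
    # got 3
    ```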

    Much of the socket code had a pattern of “react { whenever { } }” blocks, with “whenever” as a sort of async loop. He then moved on from sockets to services (using a Supply pipeline) and amazed us by announcing the release of “cro”, a microservices library that even supports HTTP/2 and WebSockets. It is installable using Perl 6 with “zef install --/test cro”.
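    The “react { whenever { } }” shape can be seen even without sockets. This hedged sketch of mine uses a timer Supply in place of network I/O:

    ```perl6
    # react blocks until done is called; each whenever subscribes
    # to an asynchronous source of events.
    react {
        whenever Supply.interval(0.1) -> $tick {
            say "tick $tick";
            done if $tick >= 2;   # exit the react block after three ticks
        }
    }
    say "finished";
    ```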

    Slides and video are available online.

    Next Lee showed Burp Scanner, which is payware but probably the best web vulnerability scanner. I wondered if anyone had dared run it on ACT or the hotel’s captive portal.

    Wendy did some cheerleading in her “Changing Image of Perl”. An earlier version is available online.

    Sue’s talk was “Spiders, Gophers, Butterflies”, although the latter were most noticeably absent. She promised me that a successor version of the talk will use them more extensively. Certainly any Perl 6 web-spidering code is likely to fit better on one slide than the Go equivalent.

    During the lightning talks Timo showed us a very pretty Perl 6 program using his SDL2::Raw to draw an animated square spiral with hypnotic colour-cycling patterns. There was also a talk by the author of a distributed bug-tracking system (which worked offline, like git).

    Later in the final evening many of us ate and chatted in another restaurant where we witnessed a dog fight being narrowly averted and learnt that Wendy didn’t like Perl 5’s bless for both technical and philosophical reasons.

    p6steve: perl6 Module How To

    Published by p6steve on 2017-08-18T16:29:28

    Some investigation has discovered great resources on how to write and then list a perl6 module…
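    The heart of a listable perl6 module is its META6.json metadata file. The following is a minimal illustrative sketch (the module name and paths are made up):

    ```json
    {
        "name"        : "My::Module",
        "version"     : "0.0.1",
        "description" : "An example module",
        "perl"        : "6.c",
        "authors"     : [ "p6steve" ],
        "provides"    : { "My::Module" : "lib/My/Module.pm6" },
        "depends"     : [ ],
        "license"     : "Artistic-2.0"
    }
    ```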

    p6steve: Physics::Unit in perl6

    Published by p6steve on 2017-08-18T16:23:07

    First and foremost, homage to the original authors of Physics::Unit and related perl5 CPAN modules. I would be honoured to hear from you and to collaborate in any way.

    What’s the big picture? TOP down, I have in mind:

    So, that said, given my poor state of knowledge of most of these things, my thinking is to start building BOTTOM up and see the shape of things that emerges, while learning some perl6 on the way.

    So, first I am going to need some MEASUREMENTS which are VALUES expressed in UNITS with associated ERROR.
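    A first cut at such a type might look like the following. This is a hypothetical sketch of my own, not the actual API of Physics::Unit or any related module:

    ```perl6
    # A measurement couples a value and its error to a unit string.
    class Measurement {
        has Real $.value is required;
        has Str  $.units is required;
        has Real $.error = 0;

        method Str { "$!value +/- $!error $!units" }
    }

    my $g = Measurement.new(value => 9.81, units => 'm/s^2', error => 0.02);
    say $g.Str;    # 9.81 +/- 0.02 m/s^2
    ```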

    I took a gander at the CPAN module Physics::Udunits2, which is a perl5 interface to udunits2, and felt that the richness of its units and adherence to NIST guidelines were not of sufficient benefit to overcome my sense of incoherence.

    So, to cut a long story short, I decided to take inspiration from Physics::Unit.

    Next, I needed some guidance on How to Build a perl6 module…