pl6anet

Perl 6 RSS Feeds

Steve Mynott (Freenode: stmuk) steve.mynott (at)gmail.com / 2017-04-26T21:19:15


Weekly changes in and around Perl 6: 2017.17 Interesting Times

Published by liztormato on 2017-04-24T21:45:59

Indeed. The past week saw the Rakudo Compiler Release 2017.04 have several point updates. Zoffix Znet explains it all in The Failure Point of a Release. The good news: if you’re waiting for a Rakudo Star Release of 2017.04, a release candidate is now available for testing. So please do!

Distribution License

Samantha McVey found that a lot of distributions in the ecosystem have a poor definition of the license they are released under. So she wrote a call to action in: Camelia Wants YOU to Add License Tags to Your Module! So please do!

The Perl Conference – US

The preliminary schedule for the Perl Conference - US on 19-21 June (formerly known as YAPC::NA) is now available. Please note that Damian Conway will be giving some interesting Perl 6 related tutorials!

Core Developments

Blog Posts

Wow, what a nice bunch of blog posts!

Meanwhile on Twitter

Meanwhile on StackOverflow

Ecosystem Additions

Winding down

Yours truly missed most of the excitement the past week on account of being on the road a lot. In a way, I’m glad I did. On the other hand, feels like I should have been around. Ah well, you can’t have it all. But if you want more, please check in again next week for more Perl 6 news!


samcv: Camelia Wants YOU to Add License Tags to Your Module!

Published on 2017-04-23T07:00:00

Open to scene, year 2017: With no good guidance on the license field, the ecosystem had at least as many variations of the "Artistic 2.0" license tag as humans have fingers. But there was hope that robot kind and human kind could work to solve this problem, together.

Most of our ecosystem modules that have licenses are Artistic 2.0, yet we had many variations of the license metadata tag in the ecosystem for that one license, and some of the values (perl and Artistic among them) were ambiguous as well.

Note: the ambiguous license names above (perl and Artistic) were found on modules that were provably Artistic 2.0, as they had a LICENSE file. I make no assertion that all modules using these ambiguous names are Artistic 2.0; the observation only refers to what was found on actual Artistic 2.0 projects in the ecosystem.

This was by no fault of the module creators, as the docs.perl6.org example didn't even show a license field at all (this has now been updated with guidance on the license field). The original META spec in S22 used to say that the field should contain a URL to the license, and even if this had been consistent between modules in the ecosystem, it still would not have been very useful for computers or people to quickly figure out with certainty what license a project was under, as the URLs were at many different addresses for the same licenses.

It was clear the original spec was not sufficiently useful to computers or package managers, so the spec was changed to conform more closely to other parts of META. It was decided we would use SPDX standard identifiers in the license field, which are both human and computer readable, while allowing an optional URL to the license to go under the support key of the META (where other project URLs already go).

This new effort hopes to make sure the license fields in META are useful to both computers and humans, so we standardized on the SPDX identifiers, which are the most widely used license identifiers in the open source world.
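
For reference, a minimal license entry in a META6.json might look like this (module name and version are made up for illustration):

{
    "name"    : "My::Module",
    "version" : "0.0.1",
    "license" : "Artistic-2.0"
}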

Humans and Robots working together

We had 103 modules that had license fields, but with non-standard values. My robot agreed to make about 50 pull requests, as a show of good will towards us humans :). These automated pull requests were made when it was fully certain which license the project had (either from the LICENSE file or because the license field was "The Artistic 2" or something similarly unambiguous but nonstandard).

There is a full list of the modules with non-standard licenses here where we are keeping track of the progress that has been made! The list is planned to expand to also cover modules with no license field at all, but the ones with license fields were much easier for my robot friend to deal with.

If you have a module in that list, or have a module with no license field (don't feel bad, until several days ago none of my modules had license fields either), it is your job to add one!

If you have any modules which my robot friend or I didn't PR, feel free to add license fields to those as well. If you see a module with no license field in the META file, either submit a PR of your own if it has a LICENSE file showing which license it is under, or open an issue so the author can make the change themselves. If possible add it to the list above so we can keep track of it. As mentioned before, make sure to use SPDX identifiers.

For the details of the updated META spec regarding the license field, please see S22 here.

Thank you for doing your part to help make the ecosystem a better place!

P.S. On April 19th only 13% of modules had a license field at all. Now, 4 days later we are up to 20.5%! Keep up the good work everyone!

Zoffix Znet: The Failure Point of a Release

Published on 2017-04-23T00:00:00

The April Glitches in Rakudo Releases

6guts: Massively reducing MoarVM Fixed Size Allocator contention

Published by jnthnwrthngtn on 2017-04-22T14:37:35

The latest MoarVM release, 2017.04, contains a significant improvement for multi-threaded applications, especially those that are CPU-bound. I mentioned the improvement briefly on Twitter, because it’s hard to be anything other than brief on Twitter. This post contains a decidedly less brief, though hopefully somewhat interesting, description of what I did. Oh, and also a bonus footnote in which I attempt to prove the safety (or lack of safety) of a couple of lock free algorithms, because that’s fun, right?

The Fixed Size Allocator

The most obvious way to allocate memory in a C program is through functions like malloc and calloc. Indeed, we do this plenty in MoarVM. The malloc and calloc implementations in C libraries have certainly been tuned a bunch, but at the same time they have to have good behavior for a very wide range of programs. They also need to keep track of the sizes of allocations, since a call to free does not pass the size of the memory being released. And they need to try to avoid fragmentation, which can lead to out-of-memory errors occurring because the heap ends up with lots of small gaps, but none big enough to allocate a larger object.

When we know a few more properties of the memory usage of a program, and we have information around to know the size of the memory block we are freeing, it’s possible to do a little better. MoarVM does this in multiple ways.

One of them is by using a bump-the-pointer allocator for memory that is managed by the garbage collector. Objects allocated this way have a header that points to a type table that knows the size of the object that was allocated, meaning the size information is readily available. And the GC can move objects around in memory, since it can find all of the references to an object and update them, meaning there is a way out of the fragmentation trap too.

The call stack is another example. In the absence of closures, it is possible to allocate a block of memory and use it like a stack. When a program makes a call, the current location in the memory block is taken as the address for the call frame memory, and the location is bumped by the frame size. This could be seen as a “push”, in stack terms. Because call frames are left in the opposite order to which they are entered, freeing them is just subtraction. This could be seen as a “pop”. Since holes are impossible, fragmentation cannot occur.
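
To make the push/pop arithmetic concrete, here is a toy model of bump allocation (a Perl 6 sketch of my own; MoarVM's real implementation is C):

my $stack-top = 0;
sub enter-frame(Int $size) { my $addr = $stack-top; $stack-top += $size; $addr } # "push"
sub leave-frame(Int $size) { $stack-top -= $size }                               # "pop"
say enter-frame(64); # 0
say enter-frame(32); # 64
leave-frame(32);
leave-frame(64);
say $stack-top;      # 0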

A third case is covered by the fixed size allocator. This is the most difficult of the three. It tries to do a better job than malloc and friends in the case that, at the point when memory is freed, we readily know the size of the memory. This allows it to create regions of memory that consist of N blocks of a fixed size, and allocate the memory out of those regions (which it calls “pages”). When a memory request comes in, the allocator first checks if it’s within the size range that the fixed size allocator is willing to handle. If it isn’t, it’s passed along to malloc. Otherwise, the size is rounded up to the nearest “bin size” (which are 8 bytes, 16 bytes, 24 bytes, and so forth). A given bin consists of a set of pages to allocate from, together with a free list of blocks that have been handed back to the allocator.

If the free list contains any entries, then one of them will be taken. If not, then the pages will be considered. If the current page is not full, then the allocation will be made from it. Otherwise, another page will be allocated. When memory is freed, it is always added to the free list of the appropriate bin. Therefore, a longer-running program, in steady state, will typically end up getting all of its allocations from the free list.
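
To illustrate just the bin-size rounding described above (again a Perl 6 sketch of my own, not the allocator's C code):

sub bin-size(Int $requested) {
    # round up to the nearest multiple of 8 bytes
    ($requested + 7) div 8 * 8
}
say bin-size(13); # 16
say bin-size(24); # 24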

Enter threads

Building a fixed size allocator for a single-threaded environment isn’t all that difficult. But what happens when it needs to cope with being used in a multi-threaded program? Well…it’s complicated. Clearly, it is not possible to have a single global fixed size allocator and have all of the threads just use it without any kind of concurrency control. Taking an item off the freelist is a multi-step process, and allocating from a page – or allocating a new page – is even more steps. Without concurrency control, there will be data races all over, and we’ll be handed a SIGSEGV in record time.

It’s worth stopping to consider what would happen if we were to give every thread its own fixed size allocator. This turns out to get messy fast, as memory allocated on one thread may be freed by another. A seemingly simple scheme is to say that the freed memory is simply appended to the freelist of the freeing thread’s fixed size allocator. Unfortunately, this has two bad properties.

  1. When the thread ends, we can’t just throw away the pages – because bits of them may still be in use by other threads, or referenced in the free lists of other threads. So they’d need to be somehow “re-homed”, which is going to need some kind of coordination. Further measures may be needed to mitigate memory fragmentation in programs that spawn and join many threads during their lifetimes.
  2. Imagine a producer/consumer setup, where one thread does allocations and passes the allocated memory to another thread, which processes the data in the memory and frees it. The producing thread will build up a lot of pages to allocate out of. The consuming thread will build up an ever longer free list. Memory runs out. D’oh.

So, MoarVM went with a single global fixed size allocator. Of course, this has the drawback of needing concurrency control.

Concurrency control

The easiest possible form of concurrency control is to have threads acquire a mutex on every allocate and free operation. This has the benefit of being very straightforward to understand and reason about. It has the disadvantage of being extremely costly. Mutex acquisition can be relatively cheap, but it gets expensive when there is high contention – that is, lots of threads trying to obtain the lock. And since all CPU-bound threads will typically allocate some working memory, particularly in a VM for a dynamic language that doesn’t yet do escape analysis, that adds up to a lot of contention.

So, MoarVM did something more sophisticated.

First, the easy part. It’s possible to append to a free list with a CPU-provided atomic operation, provided taking from the freelist is also using one. So, no mutex acquisition is required for freeing memory. However, an atomic operation still requires a kind of locking down at the CPU level. It’s cheaper than a mutex acquire/release for sure, but there will still be contention between CPU cores for the cache line holding the head of the free list.

What about allocation? It turns out that we cannot just take from a free list using an atomic operation without hitting the ABA problem (gory details in footnote). Therefore, some kind of locking is needed to ensure an ordering on the operations. In most cases, the atomic operation will work on the first attempt (it’s competing with frees, which happen without any kind of locking, meaning a retry will sometimes be needed). In cases where something will complete very rapidly, a spinlock may be used in place of a full-on mutex. So, the MoarVM fixed size allocator allocation scheme boiled down to:

  1. Acquire the spin lock.
  2. Try to take from the free list in a loop, until either we succeed or the free list is seen to be empty.
  3. Release the spin lock.
  4. If we failed to obtain memory from the free list, take the slow path to get memory from a page, allocating another page if needed. This slow path does acquire a real mutex.

Contention

First up, I’ll note that the strategy outlined above does beat the “just take a mutex for every allocate/free” approach – at least, in all of the benchmarks I’ve considered. Frees end up being lock free, and most of the allocations just do a spin lock and an atomic operation.

At the same time, contention means contention, and no lock free data structure or spinlock changes that. If multiple threads are constantly scrambling to work on the same memory location – such as the head of a free list – it’s going to get expensive. How expensive? On an Intel Core i7, obtaining a cache line that is held by another core exclusively – which it typically will be under contention – costs somewhere around 70 CPU cycles. It gets worse in a multi-CPU setup, where it could easily be hundreds of CPU cycles. Note this is just for one operation; the spinlock is a further atomic operation and, of course, it uses some cycles as it spins.

But how much could this end up costing in a real world Perl 6 application? I recently had the chance to find out, and the numbers were ugly. Measurements obtained by perf showed that a stunning 40% of the application’s runtime was spent inside of the fixed size allocator. (Side note: perf is a sampling profiler, which – in my handwavey understanding – snapshots the callstack at regular intervals to figure out where time is being spent. My experience has been that sampling profilers tend to be better at showing up surprising costs like this than instrumenting profilers are, even if they are in some senses less precise.)

Making things better

Clearly, there was significant room for improvement. And, happily, things are now muchly improved and my real-world program did get something close to a 40% performance boost.

To make things better, I introduced per-thread freelists, while leaving pages global and retaining global free lists also.

Memory is allocated in the first place from global pages, as before. However, when it is freed, it is instead placed on a per-thread free list (with one free list per thread per size bin). When a thread needs memory, it first checks its thread-local free list to see if there is anything there. It will only then look at the global free list, or the global pages, if the thread-local free list cannot satisfy the memory request. The upshot of this is that the vast majority of allocations and frees performed by the fixed size allocator no longer have any contention.

However, as I mentioned earlier, one needs to be very careful when introducing things like thread-local freelists not to create bad behavior when a thread terminates or in producer/consumer scenarios; the new design therefore takes care to handle both of those cases.

So, I believe this improvement is good for performance without being badly behaved for any cases that previously would have worked out fine.

Can we do better?

Always! While the major contention bottleneck is gone, there are further opportunities for improvement that are worth exploring in the future.

In summary…

If you have CPU-bound multi-threaded Perl 6 programs, MoarVM 2017.04 could offer a big performance improvement. For my case, it was close to 40%. And the design lesson from this: on modern hardware, contention is really costly, and using a lock free data structure or picking the “best kind of lock” will not overcome that.


Footnote on the ABA vulnerability: It’s decidedly interesting – at least to me – that prepending to a free list can be safely done with a single atomic operation, but taking from it cannot be. Here I’ll attempt to provide a proof for these claims.

We’ll consider a single free list whose head lives at memory location F, and two threads, T1 and T2. We assume the existence of an atomic operation, TRY-CAS(location, old, new), which will – in a single CPU instruction that may not be interrupted – compare the value in memory pointed to by location with old and, if they match, replace it with new. (CAS is short for Compare And Swap.) The TRY-CAS function evaluates to true if the replacement took place, and false if not. The threads may be preempted (that is, taken off the CPU) at any point in time.

To show that allocation is vulnerable to the ABA problem, we just need to find an execution where it happens. First of all, we’ll define the operation ALLOCATE as:

1: do
2:     allocated = *F
3:     if allocated != NULL
4:         next = allocated.next    
5: while allocated != NULL && !TRY-CAS(F, allocated, next)
6: return allocated

And FREE(C) as:

1: do
2:     current = *F
3:     C.next = current;
4: while !TRY-CAS(F, current, C)

Let’s consider a case where we have 3 memory cells, C1, C2, and C3. The free list head F points to C1, which in turn points to C2, which in turn points to C3.

Thread T1 enters ALLOCATE, but is preempted immediately after the execution of line 4. At this point, allocated contains C1 and next contains C2.

Next, T2 calls ALLOCATE, and succeeds in making an allocation. F now points to C2. It again calls ALLOCATE, meaning that F now points to C3. It then calls FREE(C1). At this point, F points to C1 again, and C1 points to C3. Notice that at this point, cell C2 is considered to be allocated and in use.

Consider what happens if T1 is resumed. It performs TRY-CAS(F, C1, C2). This operation will succeed, because F does indeed currently point to C1. This means that F now comes to point to C2. However, we earlier stated that C2 is allocated and in use, and therefore should not be in the free list. Therefore we have demonstrated the code to be buggy, and shown how the bug arises as a result of the ABA problem.

What of the claim that the FREE(C) is not vulnerable to the ABA problem? To be vulnerable to the ABA problem, another thread must be able to change the state of something that the correctness of the operation depends upon, but that is not tested by the TRY-CAS operation. Looking at FREE(C) again:

1: do
2:     current = *F
3:     C.next = current;
4: while !TRY-CAS(F, current, C)

We need to consider C and current. We can very reasonably make the assumption that the calling program is well-behaved, and will never use the cell C again after passing it to FREE(C) (unless it obtains it again in the future through another call to ALLOCATE, which cannot happen until FREE has inserted it into the free list). Therefore, C cannot be changed in any way other than the code in FREE changes it. The FREE operation holds the sole reference to C at this point.

Life is much more complicated for current. It is possible for a preemption at line 3 of FREE, followed by another thread allocating the cell pointed to by current and then freeing it again, which is certainly a case of an ABA state change. However, unlike the situation we saw in ALLOCATE, the FREE operation does not depend on the content of current. We can see this by noticing how it never looks inside of it, and instead just holds a reference to it. An operation cannot depend upon a value it never accesses. Therefore, FREE is not vulnerable to the ABA problem.


Strangely Consistent: The root of all eval

Published by Carl Mäsak

Ah, the eval function. Loved, hated. Mostly the latter.

$ perl -E'my $program = q[say "OH HAI"]; eval $program'
OH HAI

I was a bit stunned when the eval function was renamed to EVAL in Perl 6 (back in 2013, after spec discussion here). I've never felt really comfortable with the rationale for doing so. I seem to be more or less alone in this opinion, though, which is fine.

The rationale was "the function does something really weird, so we should flag it with upper case". Like we do with BEGIN and the other phasers, for example. With BEGIN and others, the upper-casing is motivated, I agree. A phaser takes you "outside of the normal control flow". The eval function doesn't.

Other things that we upper-case are things like .WHAT, which look like attributes but are really specially code-generated at compile-time into something completely different. So even there the upper-casing is motivated because something outside of the normal is happening.

eval in the end is just another function. Yes, it's a function with potentially quite wide-ranging side effects, that's true. But a lot of fairly standard functions have wide-ranging side effects. (To name a few: shell, die, exit.) You don't see anyone clamoring to upper-case those.

I guess it could be argued that eval is very special because it hooks into the compiler and runtime in ways that normal functions don't, and maybe can't. (This is also how TimToady explained it in the commit message of the renaming commit.) But that's an argument from implementation details, which doesn't feel satisfactory. It applies with equal force to the lower-cased functions just mentioned.

To add insult to injury, the renamed EVAL is also made deliberately harder to use:

$ perl6 -e'my $program = q[say "OH HAI"]; EVAL $program'
===SORRY!=== Error while compiling -e
EVAL is a very dangerous function!!! (use the MONKEY-SEE-NO-EVAL pragma to override this error,
but only if you're VERY sure your data contains no injection attacks)
at -e:1
------> program = q[say "OH HAI"]; EVAL $program⏏<EOL>

$ perl6 -e'use MONKEY-SEE-NO-EVAL; my $program = q[say "OH HAI"]; EVAL $program'
OH HAI

Firstly, injection attacks are a real issue, and no laughing matter. We should educate each other and newcomers about them.

Secondly, that error message ("EVAL is a very dangerous function!!!") is completely over-the-top in a way that damages rather than helps. I believe when we explain the dangers of code injection to people, we need to do it calmly and matter-of-factly. Not with three exclamation marks. The error message makes sense to someone who already knows about injection attacks; it provides no hints or clues for people who are unaware of the risks.

(The Perl 6 community is not unique in eval-hysteria. Yesterday I stumbled across a StackOverflow thread about how to turn a string with a type name into the corresponding constructor in JavaScript. Some unlucky soul suggested eval, and everybody else immediately piled on to point out how irresponsible that was. Solely as a knee-jerk reaction "because eval is bad".)

Thirdly, MONKEY-SEE-NO-EVAL. Please, can we just... not. 😓 Random reference to monkeys and the weird attempt at levity while switching on a nuclear-chainsaw function aside, I find it odd that a function that enables EVAL is called something with NO-EVAL. That's not Least Surprise.

Anyway, the other day I realized how I can get around both the problem of the all-caps name and the problem of the necessary pragma:

$ perl6 -e'my &eval = &EVAL; my $program = q[say "OH HAI"]; eval $program'
OH HAI

I was so happy to realize this that I thought I'd blog about it. Apparently the very dangerous function (!!!) is fine again if we just give it back its old name. 😜

gfldex: You can call me Whatever you like

Published by gfldex on 2017-04-19T11:00:43

The docs spend many words to explain in great detail what a Whatever is and how to use it from the caller perspective. There are quite a few ways to support Whatever as a callee as I shall explain.

Whatever can be used to express “all of the things”. In that case we ask for the type object that is Whatever.

sub gimmi(Whatever) {};
gimmi(*);

Any expression that contains a Whatever * will be turned into a thunk. The latter happens to be a block without a local scope (kind of, it can be turned into a block when captured). We can ask specifically for a WhateverCode to accept Whatever-expressions.

sub compute-all-the-things(WhateverCode $c) { $c(42) }
say compute-all-the-things(*-1);
say (try say compute-all-the-things({$_ - 1})) // 'failed';
# OUTPUT: «41␤failed␤»
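
To see the thunking in isolation (a quick illustration of my own, not from the original post):

my $pred = * - 1;  # the Whatever-expression becomes a WhateverCode
say $pred.WHAT;    # OUTPUT: «(WhateverCode)␤»
say $pred(43);     # OUTPUT: «42␤»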

We could also ask for a Block or a Method as both come preloaded with one parameter. If we need a WhateverCode with more than one argument we have to be precise because the compiler can’t match a Callable sub-signature with a WhateverCode.

sub picky(WhateverCode $c where .arity == 2 || fail("two stars in that expression please") ) {
    $c.(1, 2)
}
say picky(*-*);
# OUTPUT: «-1␤»
say (try picky(*-1)) // $!;
# OUTPUT: «two stars in that expression please␤  in sub picky at …»

The same works with a Callable constraint, leaving the programmer more freedom in what to supply.

sub picky(&c where .arity == 2) { c(1, 2) }

There are quite a few things a WhateverCode can’t do.

sub faily(WhateverCode $c) { $c.(1) }
say (try faily( return * )) // $!.^name;
# OUTPUT: «X::ControlFlow::Return␤»

The compiler can take advantage of that and provide compile time errors or get things done a little bit quicker. So trading the flexibility of Callable for a stricter WhateverCode constraint may make sense.


gfldex: Dealing with Fallout

Published by gfldex on 2017-04-19T09:51:53

The much welcome and overdue sanification of the IO-subsystem led to some fallout in some of my code that was enjoyably easy to fix.

Some IO-operations used to return False or undefined values on errors returned from the OS. Those have been fixed to return Failure. As a result some idioms don’t work as they used to.

my $v = "some-filename.txt".IO.open.?slurp // 'sane default';

The conditional method call operator .? does not defuse Failure; as a result, the whole expression blows up when an error occurs. Luckily try can be used as a statement, which will return Nil, so we can still use the defined-or operator // to assign default values.

my $v = (try "some-filename.txt".IO.open.slurp) // 'sane default';

The rationale for having IO-operations throw explosives is simple. Filesystem dealings cannot be atomic (at least as seen from the runtime) and can fail unexpectedly due to a tripped cable. By packaging exceptions in Failure objects Perl 6 allows us to turn them back into undefined values as we please.
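
For instance, the defined-or operator both supplies the default and marks the Failure as handled (a small illustration of my own, not from the original post):

sub risky { fail 'cable tripped' }
my $v = risky() // 'sane default';
say $v; # OUTPUT: «sane default␤»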


Weekly changes in and around Perl 6: 2017.16 IO Hits The Road

Published by liztormato on 2017-04-17T23:49:38

Zoffix Znet and his trusted bots just came out with the 2017.04 Rakudo Compiler Release. It contains the culmination of the IO grant work. A Rakudo Star release should be expected within the next few days, based on this compiler release. Apart from the IO work and all of the other optimization work that has been done, one thing to particularly note is the work that Samantha McVey has done on Unicode support and case-insensitive regex matching. Please have the appropriate amount of more efficient fun!

Perl Toolchain Summit

The Perl Toolchain Summit is now less than a month away. Some people actively developing on Perl 6 will also attend. There are even some Perl 6 related entries on the Project List. Hope to see more Perl 6 related items there soon. And to finally be able to make CPAN support for Perl 6 modules an actual thing!

Core Developments

Other blog posts

Meanwhile on Twitter

Meanwhile on StackOverflow

Meanwhile on FaceBook

Jonathan Stowe says:

If you have been experiencing difficulty accessing certain https web sites with one of the various HTTP clients, you may want to upgrade to the latest OpenSSL module – I’ve just implemented support for the TLS server name extension which is required for an increasing number of virtual hosting arrangements. It also fixes Webservice::Soundcloud.

Ecosystem Additions

Winding Down

Unexpectedly brought to you from Copenhagen, Denmark. See you again next week for more Perl 6 news!


rakudo.org: PART 3: Information on Changes Due to IO Grant Work

Published by Zoffix Znet on 2017-04-17T20:22:46

The IO grant work is at its wrap up. This note lists some of the last-minute changes to the plans delineated in earlier communications ([1], [2], [3]). Most of the listed items do not require any changes to users’ code.

Help and More Info

If you need help or more information, please join our IRC channel and ask there. You can also contact the person performing this work via Twitter @zoffix or by talking to user Zoffix in our dev IRC channel

gfldex: Slipping in a Config File

Published by gfldex on 2017-04-17T15:31:24

I wanted to add a config file to META6::bin without adding another dependency and without adding a grammar or other forms of fancy (and therefore time consuming) parsers. As it turns out, .split and friends are more than enough to get the job done.

# META6::bin config file

general.timeout = 60
git.timeout = 120
git.protocol = https

That’s what the file should look like; I wanted a multidim Hash in the end, to query values like %config<git><timeout>.

our sub read-cfg($path) is export(:HELPER) {
    use Slippy::Semilist;

    return unless $path.IO.e;

    my %h;
    slurp($path).lines\
        ».chomp\
        .grep(!*.starts-with('#'))\
        .grep(*.chars)\
        ».split(/\s* '=' \s*/)\
        .flat.map(-> $k, $v { %h{||$k.split('.').cache} = $v });

    %h
}

We slurp in the whole file and process it line by line. All newlines are removed and any line that starts with a # or is empty is skipped. We separate values and keys by = and use a Semilist Slip to build the multidim Hash. Abusing a .map that doesn’t return values is a bit smelly but keeps all operations in order.
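
Assuming the sample file from above was saved as meta6.cfg (a filename picked just for this example), usage looks like this:

my %config = read-cfg('meta6.cfg');
say %config<git><timeout>; # OUTPUT: «120␤»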

A Semilist is the thing you can find in %hash{1;2;3} (same for arrays) to express multi-dimensionality. Just using a normal list won't cut it, because a list is a valid key for a Hash.
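
In core Perl 6 the semicolon form looks like this (a tiny example of my own):

my %h;
%h{'git'; 'timeout'} = 120; # a two-dimensional subscript via a semilist
say %h<git><timeout>;       # OUTPUT: «120␤»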

I had Rakudo::Slippy::Semilist lying around for quite some time but never really used it much, because it's cheating by using nqp-ops to get some decent speed. As it turned out it's not really the operations on a Hash but the circumfix:<{ }>-operator itself that was causing a 20x speed drop. By calling .EXISTS-KEY and .BIND-KEY directly the speed hit shrinks down to 7% over an nqp-implementation.

It’s one of those cases where things fall into place with Perl 6. Being able to define my own operator in conjunction with ». allows me to keep the code flowing in the order of thoughts instead of breaking it up into nested loops.


Perl 6 Maven: Benchmarking crypt with SHA-512 in Perl 6

Published by szabgab

samcv: Indexing Unicode Things, Improving Case Insensitive Regex

Published on 2017-04-15T07:00:00

In the 2017.04 release of Rakudo under the MoarVM backend, there will be some substantial improvements to regex speed.

I have been meaning to make a new blog post for some time about my work on Unicode in Perl 6. This is going to be the first post of several that I have been meaning to write. As a side note, let me mention that I have a Perl Foundation grant proposal which is related to working on Unicode for Perl 6 and MoarVM.

The first of these improvements I'm going to write about is case insensitive regex m:i/ /. MoarVM formerly lowercased both the haystack and the needle whenever nqp::indexic was called. The new code uses foldcase instead of lowercasing.
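
For readers unfamiliar with the adverb, a trivial example of a case insensitive match:

say so 'PERL 6' ~~ m:i/ perl /; # True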

It ended up 1.8-3.3x faster than before. The work began when MasterDuke submitted a Pull Request which changed the underlying MoarVM function behind nqp::indexic (index ignorecase) to use foldcase instead of lowercase.

At first this seemed like a great and easy improvement, but shortly after there were some serious problems. You see, when you foldcase a string, sometimes the number of graphemes can change. One grapheme can become up to 3 new codepoints! For example the ligature ‘ﬆ’ will foldcase to ‘st’ (two codepoints), even though it lowercases to the single grapheme ‘ﬆ’. The issue was, if the string to be searched contained any of these expanding characters, the nqp::indexic op's results would be off by however many codepoints were added!

'ﬆ'.fc.say;        # st
'ﬆ'.chars.say;     # 1
'ﬃ'.fc.say;       # ffi
'ﬃ'.chars.say;    # 1
'ﬃ'.fc.chars.say; # 3
'ß'.fc.say;        # ss

So this was a real problem.

On the bright side of things, this allowed me to make many great changes in how case insensitive strings are searched for under Perl 6/MoarVM.

The nqp::index command does a lot of the work when searching for a string. I discovered we had an nqp::indexic operation that searched for a string while ignoring case, but it was not used everywhere: nqp::index was still used extensively, which meant changing the case on the Perl 6 side both before calling nqp::index and also when using nqp::indexic (index ignore case). In addition, MoarVM changed the case of the entire haystack and needle whenever the nqp::indexic operation was used.

On the MoarVM side I first worked on getting it working with foldcase, and quickly discovered that the only sane way to do this was to begin foldcasing operations on the haystack only from the starting point sent to the indexic function. If you foldcased the string before the requested index, it would screw up the offset. My solution was to foldcase the needle, and then foldcase each grapheme down the haystack, only as far as we needed to find our match, preventing useless work foldcasing parts of the string we did not need.

The offset of the needle found by the indexic op is relative to the original string: characters are expanded as needed during the search, but the returned offset always refers to the original string, not its foldcased version, making it useful and relevant information on where the needle is in the original string. As with regex, we are looking for the match in the haystack, and so must be able to return the section of the string we have matched.

The end result is we now have a 1.8x to 3.3x (depending on not finding a match/finding a match at the beginning) faster case insensitive regex!

gfldex: Speeding up Travis

Published by gfldex on 2017-04-14T20:55:00

After some wiggling I managed to convince Travis to use Ubuntu packages, trimming about 4 minutes off a test run. Sadly the .debs don't come with zef built in, which would save another 40 seconds.

What follows is a working .travis.yml.

sudo: required
before_install:
    - wget https://github.com/nxadm/rakudo-pkg/releases/download/2017.03_02/perl6-rakudo-moarvm-ubuntu16.04_20170300-02_amd64.deb
    - sudo dpkg --install perl6-rakudo-moarvm-ubuntu16.04_20170300-02_amd64.deb
    - sudo /opt/rakudo/bin/install_zef_as_root.sh
    - export PATH=/opt/rakudo/bin:$PATH
    - sudo chown -R travis.travis /home/travis/.zef/
install:
    - zef --debug install .
script:
- zef list --installed

Using a meta package in conjunction with .debs makes it quite easy to test if a module will work not just with a bleeding-edge Rakudo but with versions users might actually have.


brrt to the future: Function Call Milestone

Published by Bart Wiegmans on 2017-03-28T16:14:00

Hi everybody. It's high time for another update, and this time I have good news. The 'expression' JIT compiler can now compile native ('C') function calls (although it's not able to use the results). This is a major milestone because function calls are hard! (At least from the perspective of a compiler, and especially from the perspective of the register allocator). Also because native function calls are really very important in MoarVM. Most of its 'primitive' operations (like hash table access, string equality, big integer arithmetic) are implemented by invoking native functions, and so to compile almost any program the JIT has to compile many function calls.

What makes function calls 'hard' is that they must implement the 'calling convention' of the relevant 'application binary interface' (ABI). In short, the ABI specifies the locations of function call parameters. A small number of parameters (on Windows, the first 4; for POSIX platforms, the first 6) are placed in registers, and if there are more parameters they are usually placed on the stack. Aside from the calling convention, the ABI also specifies the expected alignment of the stack pointer (per 16 bytes), the registers a function may overwrite (clobber in ABI-speak), and which registers must have their original values after the function returns. The last type of register is called 'callee-saved'. Note that at least a few registers must be callee-saved, especially those related to call stack management, because if the callee overwrote those it would be impossible to return control back to the caller. By the way, manipulating exactly those registers is how the setjmp and longjmp 'functions' work.

So the compiler is tasked with generating code that ensures the correct values are placed in the correct registers. That sounds easy enough, but what if these registers are taken by other values, and what if those other values might be required for another parameter? Indeed, what if the value in the %rdx register needs to be in the %rsi register, and the value of the %rsi register is required in the %rdx register? How to determine the correct ordering for shuffling the operands?

One simple way to deal with this would be to eject all values from registers onto the stack, and then to load the values from registers if they are necessary. However, that would be very inefficient, especially if most function calls have no more than 6 (or 4) parameters and most of these parameters are computed for the function call only. So I thought that solution wouldn't do.

Another way to solve this would be if the register allocator could ensure that values are placed in their correct registers directly (especially for register parameters), i.e. by 'precoloring'. (The name comes from register allocation algorithms that work by 'graph coloring', something I will try to explain in a later post). However, that isn't an option due to my choice of 'linear scan' as the register allocation algorithm. This is a 'greedy' algorithm, meaning that it decides the allocation for a live range as soon as it encounters it, and that it cannot revert that decision once it's been made. (If it could, it would be more like a dynamic programming algorithm). So to ensure that the allocation is valid I'd have to make sure that the information about register requirements is propagated backwards from the instructions to all values that might conflict with it... and at that point we're no longer talking about linear scan, and I would be better off engineering a new algorithm. Not a very attractive option either!

Instead, I thought about it and it occurred to me that this problem seems a lot like unravelling a dependency graph, with a number of restrictions. That is to say, it can be solved by a topological sort. I map the registers to a graph structure in which each register is a node, each required transfer is an edge from the source register to its destination, and every node can have at most a single inbound edge.

I linked to the topological sort page for an explanation of the problem, but I think my implementation is really quite different from that presented there. They use a node visitation map and a stack; I use an edge queue and an outbound count. A register transfer (edge) can be enqueued if it is clear that the destination register is not currently used. Transfers from registers to stack locations (as function call parameters) or local memory (to save the value from being overwritten by the called function) are also enqueued directly. As soon as the outbound count of a node reaches zero, it is considered to be 'free' and the inbound edge (if any) is enqueued.


Unlike a 'proper' dependency graph, cycles can and do occur, as in the example where '%rdx' and '%rsi' would need to swap places. Fortunately, because of the single-inbound edge rule, such cycles are 'simple' - all outbound edges not belonging to the cycle can be resolved prior to the cycle-breaking, and all remaining edges are part of the cycle. Thus, the cycle can always be broken by freeing just a single node (i.e. by copying to a temporary register).
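
To make the queue-and-count idea concrete, here is a toy Perl 6 sketch of my own (the JIT itself is written in C; register names are plain strings here, and each destination has exactly one inbound transfer):

sub order-transfers(%move is copy) { # destination => source
    my %outbound;                    # pending reads per source register
    %outbound{$_}++ for %move.values;
    my @order;
    # destinations that nothing still needs to read from are free immediately
    my @ready = %move.keys.grep({ !%outbound{$_} });
    while @ready {
        my $dst = @ready.shift;
        my $src = %move{$dst}:delete;
        @order.push("$src -> $dst");
        # once the last pending read of $src resolves, $src itself is free
        @ready.push($src) if --%outbound{$src} == 0 && (%move{$src}:exists);
    }
    @order.push("cycle among {%move.keys.sort}: break it via a temporary") if %move;
    @order;
}
say order-transfers({ rax => 'rdx', rdx => 'rsi', rsi => 'rdx' });
# first resolves rdx -> rax, then flags the rdx/rsi cycle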

The only thing left to consider are the values that are used after the function call returns (survive the function call) and that are stored in registers that the called function can overwrite (which is all of them, since the register allocator never selects callee-saved registers). So to make sure they are available afterwards, we must spill them. But there are a few spill strategies to choose from (terminology made up by me), among them a 'full spill' of all values currently in registers, and a 'spill-and-restore' that saves and restores a value around the call itself.

The current register allocator does a full spill when it's run out of registers, and it would make some sense to apply the same logic for function-call related spills. I've decided to use spill-and-restore, however, because a full spill complicates the sorting order (a value that used to be in a register is suddenly only in memory) and it can be wasteful, especially if the call only happens in an alternative branch. This is common for instance when assigning values to object fields, as that may sometimes require a write barrier (to ensure the GC tracks all references from 'old' to 'new' objects). So I'm guessing that it's going to be better to pay the cost of spilling and restoring only in those alternative branches, and that's why I chose to use spill-and-restore.

That was it for today. Although I think being able to call functions is a major milestone, this is not the very last thing to do. We currently cannot allocate any of the registers used for floating-point calculations, which is a relatively minor limitation since those aren't used very frequently. But I also need to do some more work to actually use function return values and apply generic register requirements of tiles. But I do think the day is coming near where we can start thinking about merging the new JIT with the MoarVM master branch, making it available to everybody. Until next time!

Weekly changes in and around Perl 6: 2017.15 Kaboom! ⁽¹⁾

Published by liztormato on 2017-04-10T20:57:57

Zoffix Znet did a massive amount of work on the IO Grant. Some of the highlights:

All in all a very good weekly result!

Other Core Developments

Blog Posts

Meanwhile on Twitter

Meanwhile on StackOverflow

Ecosystem Additions

Winding Down

Apart from these visible results, a lot of work is being done by TimToady, Bart Wiegmans and Paweł Murias that hasn’t come to full fruition just yet. Yours truly is very anxious to tell about them in the (near) future! So check in again next week!

⁽¹⁾ With apologies to Jonathan Stowe.


gfldex: Fork All The Things!

Published by gfldex on 2017-04-09T17:59:34

As requested by timotimo META6::bin is now able to fork a module on github by looking up its source in the ecosystem and telling git to clone it to the local FS.

meta6 --fork-module=Somebody::Else::Module

As a little bonus it will create a t/meta.t if possible. To be able to do so, META6::bin had to learn how to add dependencies to a META6.json-file.

meta6 --add-dep=Important::Module

I will add pull-request creation as soon as I have figured out how to convince the GitHub API to do my bidding.

UPDATE: Pull requesting is in but not well tested (I don’t have any non-synthetic PRs to send right now). A META6.json is required to get the repo-name automatically. The youngest commit message sports the default PR title.

meta6 --pull-request

Perl 6 Maven: Encrypting Passwords in Perl 6 using crypt and SHA-512

Published by szabgab

Weekly changes in and around Perl 6: 2017.14 The IO Front Advances

Published by liztormato on 2017-04-03T22:18:56

Zoffix Znet really hit the ground running this week! After announcing his IO plan, publishing his progress report for the month of March and waiting for the end of the comment period, he published the IO Upgrade Information, and after some late insights, IO Upgrade Information, Part 2, which contain an up-to-date account of how things are progressing. And there’s of course the list of IO issues he’s working on. If you’re interested in these developments, please check these out. And contact Zoffix with any feedback, the sooner the better!

Improving the Robustness of Unicode Support

Samantha McVey put up a grant proposal covering the following deliverables:

Check it out and give her your opinion!

Camelia in the Wild

A new section in the Perl 6 weekly where spottings of Camelia in the wild can be reported. This week’s spotting was at a concert of ARW in Brussels.

Try out Perl 6 online

If you would like to try out some Perl 6 code without wanting to install Rakudo, you can now also go to https://tio.run/nexus/perl6! Just type in your code, click the play button and see the result! Too bad it currently runs the 2017.01 release, which is now over 2 months old! Still, if you just want to test some code, that is pretty recent and beats many packages provided by some distributions.

NativeCall Introduction

Naoum Hankache‘s excellent Perl 6 Introduction now has a chapter introducing the NativeCall interface (explaining how you can easily call code from external libraries from your Perl 6 source code). For now that chapter is available in English only, but I have no doubt the other languages (Bulgarian, Chinese, Dutch, French, German, Japanese, Portuguese and Spanish) will follow soon!

Coverage reports

The Rakudo Perl 6 core has up-to-date coverage reports again. And now we also have coverage reports for Moar, thanks to Samantha McVey. So if you’re looking to add some tests to get better coverage, that’s where you can find which parts of the system are not tested yet!

Other Core Developments

Blog Posts

Meanwhile on Twitter

Meanwhile on StackOverflow

Meanwhile on FaceBook

  • Paul Bennett mentioned STOKE, an interesting approach to optimisation:

    The ACM has a paper on a new compiler optimization called STOKE. It calls itself “stochastic”, but they seem to mean something other than “random” … more like “capable of working outside the explicit order of operations as given”. It beats gcc -O3 by a significant margin.

Ecosystem Additions

Winding Down

Wow, what a busy week again. Please check in again next week for more Perl 6 news!


    rakudo.org: PART 2: Upgrade Information for Changes Due to IO Grant Work

    Published by Zoffix Znet on 2017-04-03T00:15:07

    We’re making more changes!

    Do the core developers ever sleep? Nope! We keep making Perl 6 better 24/7!

    Why?

    Not more than 24 hours ago, you may have read Upgrade Information for Changes Due to IO Grant Work. All of that is still happening.

    However, it turned out that I (Zoffix) had an incomplete understanding of how changes in the 6.d language will play along with 6.c stuff. My original assumption was that we could remove or change existing methods, but that assumption was incorrect. Pretty much the only sane way to incompatibly change a method in an object in 6.d is to add a new method with a different name.

    Since I’d rather we not have, e.g., .child and .child-but-secure, for the next decade, we have a bit of an in-flight course correction:

    ORIGINAL PLAN was to minimize incompatibilities with existing 6.c language code; leave everything potentially-breaking for 6.d

    NEW PLAN is to right away add everything that does NOT break 6.c-errata specification, into 6.c language; leave everything else for 6.d. Note that current 6.c-errata specification for IO is sparse (the reason IO grant is running in the first place), so there’s lots of wiggle room to make most of the changes in 6.c.

    When?

    I (Zoffix) still hope to cram all the changes into 2017.04 release. Whether that’s overly optimistic, given the time constraints… we’ll find out on April 17th. If anything doesn’t make it into 2017.04, all of it definitely will be in 2017.05.

    What?

    Along with the original list in the first Upgrade Information Notice, the following changes may affect your code. I’m excluding any non-conflicting changes.

    Potential changes:

    Changes for 6.d language:

    Help and More Info

    If you need help or more information, please join our IRC channel and ask there. You can also contact the person performing this work via Twitter @zoffix or by talking to user Zoffix in our dev IRC channel

    rakudo.org: Upgrade Information for Changes Due to IO Grant Work

    Published by Zoffix Znet on 2017-04-02T08:31:49

    As previously notified, there are changes being made to IO routines. This notice is to provide details on changes that may affect currently-existing code.

    When?

    Barring unforeseen delays, the work affecting the 6.c language is planned to be included in the 2017.04 Rakudo Compiler release (planned for release on April 17, 2017), on which the next Rakudo Star release will be based.

    Some or all of the work affecting 6.d language may also be included in that release and will be available if the user uses use v6.d.PREVIEW pragma. Any 6.d work that doesn’t make it into 2017.04 release, will be included in 2017.05 release.

    If you use development commits of the compiler (e.g. rakudobrew), you will
    receive this work as-it-happens.

    Why?

    If you only used documented features, the likelihood of you needing to change any of your code is low. The 6.c language changes due to IO Grant work affect either routines that are rarely used or undocumented routines that might have been used by users assuming they were part of the language.

    What?

    This notice describes only changes affecting existing code and only for 6.c language. It does NOT include any non-conflicting changes or changes slated for 6.d language. If you’re interested in the full list of changes, you can find it in the IO Grant Action Plan

    The changes that may affect existing code are:

    Help and More Info

    If you need help or more information, please join our IRC channel and ask there. You can also contact the person performing this work via Twitter @zoffix or by talking to user Zoffix in our dev IRC channel

    Perlgeek.de: Perl 6 By Example: Idiomatic Use of Inline::Python

    Published by Moritz Lenz on 2017-04-01T22:00:01

    This blog post is part of my ongoing project to write a book about Perl 6.

    If you're interested, either in this book project or any other Perl 6 book news, please sign up for the mailing list at the bottom of the article, or here. It will be low volume (less than an email per month, on average).


    In the two previous installments, we've seen Python libraries being used in Perl 6 code through the Inline::Python module. Here we will explore some options to make the Perl 6 code more idiomatic and closer to the documentation of the Python modules.

    Types of Python APIs

    Python is an object-oriented language, so many APIs involve method calls, which Inline::Python helpfully automatically translates for us.

    But the objects must come from somewhere and typically this is by calling a function that returns an object, or by instantiating a class. In Python, those two are really the same under the hood, since instantiating a class is the same as calling the class as if it were a function.

    An example of this (in Python) would be

    from matplotlib.pyplot import subplots
    result = subplots()
    

    But the matplotlib documentation tends to use another, equivalent syntax:

    import matplotlib.pyplot as plt
    result = plt.subplots()
    

    This uses the subplots symbol (class or function) as a method on the module matplotlib.pyplot, which the import statement aliases to plt. This is a more object-oriented syntax for the same API.

    Mapping the Function API

    The previous code examples used this Perl 6 code to call the subplots symbol:

    my $py = Inline::Python.new;
    $py.run('import matplotlib.pyplot');
    sub plot(Str $name, |c) {
        $py.call('matplotlib.pyplot', $name, |c);
    }
    
    my ($figure, $subplots) = plot('subplots');
    

    If we want to call subplots() instead of plot('subplots'), and bar(args) instead of plot('bar', args), we can use a function to generate wrapper functions:

    my $py = Inline::Python.new;
    
    sub gen(Str $namespace, *@names) {
        $py.run("import $namespace");
    
        return @names.map: -> $name {
            sub (|args) {
                $py.call($namespace, $name, |args);
            }
        }
    }
    
    my (&subplots, &bar, &legend, &title, &show)
        = gen('matplotlib.pyplot', <subplots bar legend title show>);
    
    my ($figure, $subplots) = subplots();
    
    # more code here
    
    legend($@plots, $@top-authors);
    title('Contributions per day');
    show();
    

    This makes the functions' usage quite nice, but comes at the cost of duplicating their names. One can view this as a feature, because it allows the creation of different aliases, or as a source of bugs when the order is messed up or a name is misspelled.

    How could we avoid the duplication should we choose to create wrapper functions?

    This is where Perl 6's flexibility and introspection abilities pay off. There are two key components that allow a nicer solution: the fact that declarations are expressions and that you can introspect variables for their names.

    The first part means you can write mysub my ($a, $b), which declares the variables $a and $b, and calls a function with those variables as arguments. The second part means that $a.VAR.name returns a string '$a', the name of the variable.
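
    For example (a tiny demonstration of the introspection, separate from the book's code):

    my $a = 42;
    say $a.VAR.name;    # $a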

    Let's combine this to create a wrapper that initializes subroutines for us:

    sub pysub(Str $namespace, |args) {
        $py.run("import $namespace");
    
        for args[0] <-> $sub {
            my $name = $sub.VAR.name.substr(1);
            $sub = sub (|args) {
                $py.call($namespace, $name, |args);
            }
        }
    }
    
    pysub 'matplotlib.pyplot',
        my (&subplots, &bar, &legend, &title, &show);
    

    This avoids duplicating the name, but forces us to use some lower-level Perl 6 features in sub pysub. Using ordinary variables means that accessing their .VAR.name results in the name of the variable, not the name of the variable that's used on the caller side. So we can't use slurpy arguments as in

    sub pysub(Str $namespace, *@subs)
    

    Instead we must use |args to obtain the rest of the arguments in a Capture. This doesn't flatten the list of variables passed to the function, so when we iterate over them, we must do so by accessing args[0]. By default, loop variables are read-only, which we can avoid by using <-> instead of -> to introduce the signature. Fortunately, that also preserves the name of the caller side variable.

    An Object-Oriented Interface

    Instead of exposing the functions, we can also create types that emulate the method calls on Python modules. For that we can implement a class with a method FALLBACK, which Perl 6 calls for us when calling a method that is not implemented in the class:

    class PyPlot is Mu {
        has $.py;
        submethod TWEAK {
            $!py.run('import matplotlib.pyplot');
        }
        method FALLBACK($name, |args) {
            $!py.call('matplotlib.pyplot', $name, |args);
        }
    }
    
    my $pyplot = PyPlot.new(:$py);
    my ($figure, $subplots) = $pyplot.subplots;
    # plotting code goes here
    $pyplot.legend($@plots, $@top-authors);
    
    $pyplot.title('Contributions per day');
    $pyplot.show;
    

    Class PyPlot inherits directly from Mu, the root of the Perl 6 type hierarchy, instead of Any, the default parent class (which in turn inherits from Mu). Any introduces a large number of methods that Perl 6 objects get by default and since FALLBACK is only invoked when a method is not present, this is something to avoid.

    The method TWEAK is another method that Perl 6 calls automatically for us, after the object has been fully instantiated. All-caps method names are reserved for such special purposes. It is marked as a submethod, which means it is not inherited into subclasses. Since TWEAK is called at the level of each class, if it were a regular method, a subclass would call it twice implicitly. Note that TWEAK is only supported in Rakudo version 2016.11 and later.

    There's nothing specific to the Python package matplotlib.pyplot in class PyPlot, except the namespace name. We could easily generalize it to any namespace:

    class PythonModule is Mu {
        has $.py;
        has $.namespace;
        submethod TWEAK {
            $!py.run("import $!namespace");
        }
        method FALLBACK($name, |args) {
            $!py.call($!namespace, $name, |args);
        }
    }
    
    my $pyplot = PythonModule.new(:$py, :namespace<matplotlib.pyplot>);
    

    This is one Perl 6 type that can represent any Python module. If instead we want a separate Perl 6 type for each Python module, we could use roles, which are optionally parameterized:

    role PythonModule[Str $namespace] is Mu {
        has $.py;
        submethod TWEAK {
            $!py.run("import $namespace");
        }
        method FALLBACK($name, |args) {
            $!py.call($namespace, $name, |args);
        }
    }
    
    my $pyplot = PythonModule['matplotlib.pyplot'].new(:$py);
    

    Using this approach, we can create type constraints for Python modules in Perl 6 space:

    sub plot-histogram(PythonModule['matplotlib.pyplot'], @data) {
        # implementation here
    }
    

    Passing in any other wrapped Python module than matplotlib.pyplot results in a type error.
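
    For example, if another Python module (say os) were wrapped the same way, a sketch of the resulting dispatch might look like this (@data is a hypothetical array of values):

    my $pyplot = PythonModule['matplotlib.pyplot'].new(:$py);
    my $os     = PythonModule['os'].new(:$py);

    plot-histogram($pyplot, @data);    # type check passes
    plot-histogram($os, @data);        # dies with a type check failure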

    Summary

    Perl 6 offers enough flexibility to create function and method call APIs around Python modules. With a bit of meta programming, we can emulate the typical Python APIs close enough that translating from the Python documentation to Perl 6 code becomes easy.

    Subscribe to the Perl 6 book mailing list


    Perl 6 Maven: Encrypting Passwords in Perl 6 using crypt

    Published by szabgab

    Zoffix Znet: But Here's My Dispatch, So callwith Maybe

    Published on 2017-03-28T00:00:00

    All about nextwith, nextsame, samewith, callwith, callsame, nextcallee, and lastcall

    Weekly changes in and around Perl 6: 2017.13 IO’s not the same

    Published by liztormato on 2017-03-27T22:15:48

    Zoffix Znet has published the IO action plan as part of the IO grant work. Please read it and if you have any comments, let Zoffix know! Alternatively / additionally, you might want to check out the February Grant Report. In the past week, several IO related parts of Rakudo have already become significantly faster!

    Hyper / Race Semantics

    Jonathan Worthington thought a lot on the issues surrounding hyper and race and produced a plan, inviting comments. Please have a look at it and come up with a better name for Seqqy. 🙂

    Fernando Correa Welcome!

    Fernando Correa has joined the ranks of the Rakudo Perl 6 Core Developers! As SmokeMachine he already has quite a few merged Pull Requests under his belt. We’re all looking forward to seeing more of his excellent work. A hearty welcome on behalf of everybody interested in Rakudo Perl 6!

    Other Core Developments

    Blog Posts

    Meanwhile on Twitter

    Meanwhile on StackOverflow

    Ecosystem Additions

    Only one addition this week…

    Winding Down

    Posted a little later than usual, all because yours truly was pleasantly detained at a concert in Brussels. Please check in again next week for more Perl 6 news!


    Perlgeek.de: Perl 6 By Example: Stacked Plots with Matplotlib

    Published by Moritz Lenz on 2017-03-25T23:00:01

    This blog post is part of my ongoing project to write a book about Perl 6.

    If you're interested, either in this book project or any other Perl 6 book news, please sign up for the mailing list at the bottom of the article, or here. It will be low volume (less than an email per month, on average).


    In a previous episode, we've explored plotting git statistics in Perl 6 using matplotlib.

    Since I wasn't quite happy with the result, I want to explore using stacked plots for presenting the same information. In a regular plot, the y coordinate of each plotted value is proportional to its value. In a stacked plot, it is the distance to the previous value that is proportional to its value. This is nice for values that add up to a total that is also interesting.

    Matplotlib offers a method called stackplot for that. Contrary to multiple plot calls on the subplots object, it requires a shared x axis for all data series. So we must construct one array for each author of git commits, in which dates with no commits come out as zero.

    As a reminder, this is what the logic for extracting the stats looked like in the first place:

    my $proc = run :out, <git log --date=short --pretty=format:%ad!%an>;
    my (%total, %by-author, %dates);
    for $proc.out.lines -> $line {
        my ( $date, $author ) = $line.split: '!', 2;
        %total{$author}++;
        %by-author{$author}{$date}++;
        %dates{$date}++;
    }
    my @top-authors = %total.sort(-*.value).head(5)>>.key;
    

    And some infrastructure for plotting with matplotlib:

    my $py = Inline::Python.new;
    $py.run('import datetime');
    $py.run('import matplotlib.pyplot');
    sub plot(Str $name, |c) {
        $py.call('matplotlib.pyplot', $name, |c);
    }
    sub pydate(Str $d) {
        $py.call('datetime', 'date', $d.split('-').map(*.Int));
    }
    
    my ($figure, $subplots) = plot('subplots');
    $figure.autofmt_xdate();
    

    So now we have to construct an array of arrays, where each inner array has the values for one author:

    my @dates = %dates.keys.sort;
    my @stack = $[] xx @top-authors;
    
    for @dates -> $d {
        for @top-authors.kv -> $idx, $author {
            @stack[$idx].push: %by-author{$author}{$d} // 0;
        }
    }
    

    Now plotting becomes a simple matter of a method call, followed by the usual commands adding a title and showing the plot:

    $subplots.stackplot($[@dates.map(&pydate)], @stack);
    plot('title', 'Contributions per day');
    plot('show');
    

    The result (again run on the zef source repository) is this:

    Stacked plot of zef contributions over time

    Comparing this to the previous visualization reveals a discrepancy: there were no commits in 2014, and yet the stacked plot makes it appear as if there were. In fact, the previous plots would have shown the same "alternative facts" if we had chosen lines instead of points. It comes from the fact that matplotlib (like nearly all plotting libraries) interpolates linearly between data points. But in our case, a date with no data points means zero commits happened on that date.

    To communicate this to matplotlib, we must explicitly insert zero values for missing dates. This can be achieved by replacing

    my @dates = %dates.keys.sort;
    

    with the line

    my @dates = %dates.keys.minmax;
    

    The minmax method finds the minimal and maximal values, and returns them in a Range. Assigning the range to an array turns it into an array of all values between the minimal and the maximal value. The logic for assembling the @stack variable already maps missing values to zero.
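
    In case the Range-to-array expansion is unfamiliar, here is a tiny illustration with integers (the code above does the same with date strings):

    say (3, 1, 7).minmax;       # 1..7
    my @all = (3, 1, 7).minmax;
    say @all;                   # [1 2 3 4 5 6 7]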

    The result looks a bit better, but still far from perfect:

    Stacked plot of zef contributions over time, with missing dates mapped to zero

    Thinking more about the problem, contributions from separate days should not be joined together, because that produces misleading results. Matplotlib doesn't support adding a legend automatically to stacked plots, so this seems to be a dead end.

    Since a dot plot didn't work very well, let's try a different kind of plot that represents each data point separately: a bar chart, or more specifically, a stacked bar chart. Matplotlib offers the bar plotting method, and a named parameter bottom can be used to generate the stacking:

    my @dates = %dates.keys.sort;
    my @stack = $[] xx @top-authors;
    my @bottom = $[] xx @top-authors;
    
    for @dates -> $d {
        my $bottom = 0;
        for @top-authors.kv -> $idx, $author {
            @bottom[$idx].push: $bottom;
            my $value = %by-author{$author}{$d} // 0;
            @stack[$idx].push: $value;
            $bottom += $value;
        }
    }
    

    We need to supply color names ourselves, and set the edge color of the bars to the same color, otherwise the black edge color dominates the result:

    my $width = 1.0;
    my @colors = <red green blue yellow black>;
    my @plots;
    
    for @top-authors.kv -> $idx, $author {
        @plots.push: plot(
            'bar',
            $[@dates.map(&pydate)],
            @stack[$idx],
            $width,
            bottom => @bottom[$idx],
            color => @colors[$idx],
            edgecolor => @colors[$idx],
        );
    }
    plot('legend', $@plots, $@top-authors);
    
    plot('title', 'Contributions per day');
    plot('show');
    

    This produces the first plot that's actually informative and not misleading (provided you're not color blind):

    Stacked bar plot of zef contributions over time

    If you want to improve the result further, you could experiment with limiting the number of bars by lumping together contributions by week or month (or maybe $n-day period).

    Next, we'll investigate ways to make the matplotlib API more idiomatic to use from Perl 6 code.

    Subscribe to the Perl 6 book mailing list


    Perl 6 Maven: Getting started with Rakudo Perl 6 in a Docker container

    Published by szabgab

    Perlgeek.de: Perl 6 By Example: Plotting using Matplotlib and Inline::Python

    Published by Moritz Lenz on 2017-03-18T23:00:01

    This blog post is part of my ongoing project to write a book about Perl 6.

    If you're interested, either in this book project or any other Perl 6 book news, please sign up for the mailing list at the bottom of the article, or here. It will be low volume (less than an email per month, on average).


    Occasionally I come across git repositories, and want to know how active they are, and who the main developers are.

    Let's develop a script that plots the commit history, and explore how to use Python modules in Perl 6.

    Extracting the Stats

    We want to plot the number of commits by author and date. Git makes it easy for us to get to this information by giving some options to git log:

    my $proc = run :out, <git log --date=short --pretty=format:%ad!%an>;
    my (%total, %by-author, %dates);
    for $proc.out.lines -> $line {
        my ( $date, $author ) = $line.split: '!', 2;
        %total{$author}++;
        %by-author{$author}{$date}++;
        %dates{$date}++;
    }
    

    run executes an external command, and :out tells it to capture the command's output and make it available as $proc.out. The command is a list, with the first element being the actual executable, and the rest of the elements being command line arguments to it.

    Here git log gets the options --date=short --pretty=format:%ad!%an, which instruct it to produce lines like 2017-03-01!John Doe. This line can be parsed with a simple call to $line.split: '!', 2, which splits on the !, and limits the result to two elements. Assigning it to a two-element list ( $date, $author ) unpacks it. We then use hashes to count commits by author (in %total), by author and date (%by-author) and finally by date. In the second case, %by-author{$author} isn't even a hash yet, and we can still hash-index it. This is due to a feature called autovivification, which automatically creates ("vivifies") objects where we need them. The use of ++ creates integers, {...} indexing creates hashes, [...] indexing and .push create arrays, and so on.
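
    Here is autovivification in isolation, with made-up data:

    my %by-author;
    %by-author{'John Doe'}{'2017-03-01'}++;    # inner hash springs into existence
    say %by-author;    # {John Doe => {2017-03-01 => 1}}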

    To get from these hashes to the top contributors by commit count, we can sort %total by value. Since this sorts in ascending order, sorting by the negative value gives the list in descending order. The list contains Pair objects, and we only want the first five of these, and only their keys:

    my @top-authors = %total.sort(-*.value).head(5).map(*.key);
    

    For each author, we can extract the dates of their activity and their commit counts like this:

    my @dates  = %by-author{$author}.keys.sort;
    my @counts = %by-author{$author}{@dates};
    

    The last line uses slicing, that is, indexing a hash with a list of keys to return a list of values.
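
    A quick illustration of such a slice on a made-up hash:

    my %count = a => 1, b => 2, c => 3;
    say %count{<a c>};    # (1 3)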

    Plotting with Python

    Matplotlib is a very versatile library for all sorts of plotting and visualization. It's written in Python and for Python programs, but that won't stop us from using it in a Perl 6 program.

    But first, let's take a look at a basic plotting example that uses dates on the x axis:

    import datetime
    import matplotlib.pyplot as plt
    
    fig, subplots = plt.subplots()
    subplots.plot(
        [datetime.date(2017, 1, 5), datetime.date(2017, 3, 5), datetime.date(2017, 5, 5)],
        [ 42, 23, 42 ],
        label='An example',
    )
    subplots.legend(loc='upper center', shadow=True)
    fig.autofmt_xdate()
    plt.show()
    

    To make this run, you have to install python 2.7 and matplotlib. You can do this on Debian-based Linux systems with apt-get install -y python-matplotlib. The package name is the same on RPM-based distributions such as CentOS or SUSE Linux. MacOS users are advised to install python 2.7 through homebrew or macports, and then use pip2 install matplotlib or pip2.7 install matplotlib to get the library. Windows installation is probably easiest through the conda package manager, which offers pre-built binaries of both python and matplotlib.

    When you run this script with python2.7 dates.py, it opens a GUI window, showing the plot and some controls, which allow you to zoom, scroll, and write the plot graphic to a file:

    Basic matplotlib plotting window

    Bridging the Gap

    The Rakudo Perl 6 compiler comes with a handy library for calling foreign functions, which allows you to call functions written in C, or anything with a compatible binary interface.

    The Inline::Python library uses the native call functionality to talk to python's C API, and offers interoperability between Perl 6 and Python code. At the time of writing, this interoperability is still fragile in places, but can be worth using for some of the great libraries that Python has to offer.

    To install Inline::Python, you must have a C compiler available, and then run

    $ zef install Inline::Python
    

    (or the same with panda instead of zef, if that's your module installer).

    Now you can start to run Python 2 code in your Perl 6 programs:

    use Inline::Python;
    
    my $py = Inline::Python.new;
    $py.run: 'print("Hello, Pyerl 6")';
    

    Besides the run method, which takes a string of Python code and executes it, you can also use call to call Python routines by specifying the namespace, the routine to call, and a list of arguments:

    use Inline::Python;
    
    my $py = Inline::Python.new;
    $py.run('import datetime');
    my $date = $py.call('datetime', 'date', 2017, 1, 31);
    $py.call('__builtin__', 'print', $date);    # 2017-01-31
    

    The arguments that you pass to call are Perl 6 objects, like the three Int objects in this example. Inline::Python automatically translates them to the corresponding Python built-in data structures. It translates numbers, strings, arrays and hashes. Return values are also translated in the opposite direction, though since Python 2 does not distinguish properly between byte and Unicode strings, Python strings end up as buffers in Perl 6.

    Objects that Inline::Python cannot translate are handled as opaque objects on the Perl 6 side. You can pass them back into Python routines (as shown with the print call above), or you can call methods on them:

    say $date.isoformat().decode;               # 2017-01-31
    

    Perl 6 exposes attributes through methods, so Perl 6 has no syntax for accessing attributes of foreign objects directly. If you try to access, for example, the year attribute of datetime.date through the normal method call syntax, you get an error.

    say $date.year;
    

    Dies with

    'int' object is not callable
    

    Instead, you have to use the getattr builtin:

    say $py.call('__builtin__', 'getattr', $date, 'year');
    

    Using the Bridge to Plot

    We need access to two namespaces in Python, datetime and matplotlib.pyplot, so let's start by importing them and writing some short helpers:

    my $py = Inline::Python.new;
    $py.run('import datetime');
    $py.run('import matplotlib.pyplot');
    sub plot(Str $name, |c) {
        $py.call('matplotlib.pyplot', $name, |c);
    }
    
    sub pydate(Str $d) {
        $py.call('datetime', 'date', $d.split('-').map(*.Int));
    }
    

    We can now call pydate('2017-03-01') to create a python datetime.date object from an ISO-formatted string, and call the plot function to access functionality from matplotlib:

    my ($figure, $subplots) = plot('subplots');
    $figure.autofmt_xdate();
    
    my @dates = %dates.keys.sort;
    $subplots.plot:
        $[@dates.map(&pydate)],
        $[ %dates{@dates} ],
        label     => 'Total',
        marker    => '.',
        linestyle => '';
    

    The Perl 6 call plot('subplots') corresponds to the python code fig, subplots = plt.subplots(). Passing arrays to Python functions needs a bit of extra work, because Inline::Python flattens arrays. Using an extra $ sigil in front of an array puts it into an extra scalar, and thus prevents the flattening.
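
    The itemization is easy to observe on its own (the data here is made up); .item does the same as the $ prefix:

    my @counts = 1, 2, 3;
    my $itemized = $@counts;    # same as @counts.item
    say @counts.perl;           # [1, 2, 3]
    say $itemized.perl;         # $[1, 2, 3] - one value, so it won't be flattened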

    Now we can actually plot the number of commits by author, add a legend, and plot the result:

    for @top-authors -> $author {
        my @dates = %by-author{$author}.keys.sort;
        my @counts = %by-author{$author}{@dates};
        $subplots.plot:
            $[ @dates.map(&pydate) ],
            $@counts,
            label     => $author,
            marker    =>'.',
            linestyle => '';
    }
    
    
    $subplots.legend(loc=>'upper center', shadow=>True);
    
    plot('title', 'Contributions per day');
    plot('show');
    

    When run in the zef git repository, it produces this plot:

    Contributions to zef, a Perl 6 module installer

    Summary

    We've explored how to use the python library matplotlib to generate a plot from git contribution statistics. Inline::Python provides convenient functionality for accessing python libraries from Perl 6 code.

    In the next installment, we'll explore ways to improve both the graphics and the glue code between Python and Perl 6.

    Subscribe to the Perl 6 book mailing list


    rakudo.org: Upgrade Information for Lexical require

    Published by Zoffix Znet on 2017-03-18T01:29:32

    Upgrade Information for Lexical require

    What’s Happening?

    Rakudo Compiler release 2017.03 includes the final piece of lexical module loading work: lexical require. This work was first announced in December, in http://rakudo.org/2016/12/17/lexical-module-loading/

    There are two changes that may impact your code, described below.

    Upgrade Information

    Lexical Symbols

    WRONG:

    # WRONG:
    try { require Foo; 1 } and ::('Foo').new;
    

    The require above is inside a block, so its symbols won’t be available
    outside of it, and the lookup will fail.

    CHANGE TO:

    (try require Foo) !=== Nil and ::('Foo').new;
    

    Now the require installs the symbols into a scope that’s lexically accessible
    to the ::('Foo') lookup.

    Optional Loading

    WRONG:

    # WRONG:
    try require Foo;
    if ::('Foo') ~~ Failure {
        say "Failed to load Foo!";
    }
    

    This construct installs a package named Foo, which would be replaced by the
    loaded Foo if it were found. But if it isn’t, the package remains a
    package, not a Failure, and so the above ~~ test will always be False.

    CHANGE TO:

    # Use return value to test whether loading succeeded:
    (try require Foo) === Nil and say "Failed to load Foo!";
    
    # Or use a run-time symbol lookup with require, to avoid compile-time
    # package installation:
    try require ::('Foo');
    if ::('Foo') ~~ Failure {
        say "Failed to load Foo!";
    }
    

    In the first example above, we test that the return value of try isn’t Nil,
    since on successful loading it will be a Foo module, class, or package.

    The second example uses a run-time symbol lookup in require, and so it never
    needs to install the package placeholder at compile time. Therefore, the
    ::('Foo') ~~ test works as intended.

    Help and More Info

    If you require help or more information, please join our chat channel
    #perl6 on irc.freenode.net

    6guts: Considering hyper/race semantics

    Published by jnthnwrthngtn on 2017-03-16T16:42:05

    We got a lot of nice stuff into Perl 6.c, the version of the language released on Christmas of 2015. Since then, a lot of effort has gone on polishing the things we already had in place, and also on optimization. By this point, we’re starting to think about Perl 6.d, the next language release. Perl 6 is defined by its test suite. Even before considering additional features, the 6.d test suite will tie down a whole bunch of things that we didn’t have covered in the 6.c one. In that sense, we’ve already got a lot done towards it.

    In this post, I want to talk about one of the things I’d really like to get nailed down as part of 6.d, and that is the semantics of hyper and race. Along with that I will, of course, be focusing on getting the implementation in much better shape. These two methods enable parallel processing of list operations. hyper means we can perform operations in parallel, but we must retain and respect ordering of results. For example:

    say (1, 9, 6).hyper.map(* + 5); # (6 14 11)

    Should always give the same results as if the hyper was not there, even if a thread computing 6 + 5 gave its result before the one computing 1 + 5. (Obviously, this is not a particularly good real-world example, since the overhead of setting up parallel execution would dwarf doing 3 integer operations!) Note, however, that the order of side-effects is not guaranteed, so:

    (1..1000).hyper.map(&say);

    Could output the numbers in any order. By contrast, race is so keen to give you results that it doesn’t even try to retain the order of results:

    say (1, 9, 6).race.map(* + 5); # (14 6 11) or (6 11 14) or ...

    Back in 2015, when I was working on the various list handling changes we did in the run up to the Christmas release, my prototyping work included an initial implementation of the map operation in hyper and race mode, done primarily to figure out the API. This then escaped into Rakudo, and even ended up with a handful of tests written for it. In hindsight, that code perhaps should have been pulled out again, but it lives on in Rakudo today. Occasionally somebody shows a working example on IRC using the eval bot – usually followed by somebody just as swiftly showing a busted one!

    At long last, getting these fixed up and implemented more fully has made it to the top of my todo list. Before digging into the implementation side of things, I wanted to take a step back and work out the semantics of all the various operations that might be part of or terminate a hyper or race pipeline. So, today I made a list of those operations, and then went through every single one of them and proposed the basic semantics.

    The results of that effort are in this spreadsheet. Along with describing the semantics, I’ve used a color code to indicate where the result leaves you in the hyper or race paradigm afterwards (that is, a chained operation will also be performed in parallel).

    I’m sure some of these will warrant further discussion and tweaks, so feel free to drop me feedback, either on the #perl6-dev IRC channel or in the comments here.


    Perlgeek.de: What's a Variable, Exactly?

    Published by Moritz Lenz on 2017-03-11T23:00:01

    When you learn programming, you typically first learn about basic expressions, like 2 * 21, and then the next topic is control structures or variables. (If you start with functional programming, maybe it takes you a bit longer to get to variables).

    So, every programmer knows what a variable is, right?

    Turns out, it might not be that easy.

    Some people like to say that in ruby, everything is an object. Well, a variable isn't really an object. The same holds true for other languages.

    But let's start from the bottom up. In a low-level programming language like C, a local variable is a name that the compiler knows, with a type attached. When the compiler generates code for the function that the variable is in, the name resolves to an address on the stack (unless the compiler optimizes the variable away entirely, or manages it through a CPU register).

    So in C, the variable only exists as such while the compiler is running. When the compiler is finished, and the resulting executable runs, there might be some stack offset or memory location that corresponds to our understanding of the variable. (And there might be debugging symbols that allow some mapping back to the variable name, but that's really a special case).

    In case of recursion, a local variable can exist once for each time the function is called.
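
    A short Perl 6 example of that (the function name is arbitrary):

    sub countdown($n) {
        my $local = $n;                  # a fresh $local exists for every call
        countdown($n - 1) if $n > 0;
        say $local;                      # prints 0, 1, 2, 3 as the calls unwind
    }
    countdown(3);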

    Closures

    In programming languages with closures, local variables can be referenced from inner functions. They can't generally live on the stack, because the reference keeps them alive. Consider this piece of Perl 6 code (though we could write the same in Javascript, Ruby, Perl 5, Python or most other dynamic languages):

    sub outer() {
        my $x = 42;
        return sub inner() {
            say $x;
        }
    }
    
    my &callback = outer();
    callback();
    

    The outer function has a local (lexical) variable $x, and the inner function uses it. So once outer has finished running, there's still an indirect reference to the value stored in this variable.

    They say you can solve any problem in computer science through another layer of indirection, and that's true for the implementation of closures. The &callback variable, which points to a closure, actually stores two pointers under the hood. One goes to the static byte code representation of the code, and the second goes to a run-time data structure called a lexical pad, or lexpad for short. Each time you invoke the outer function, a new instance of the lexpad is created, and the closure points to the new instance, and always to the same static code.

    But even in dynamic languages with closures, variables themselves don't need to be objects. If a language forbids the creation of variables at run time, the compiler knows what variables exist in each scope, and can for example map each of them to an array index, so the lexpad becomes a compact array, and an access to a variable becomes an indexing operation into that array. Lexpads generally live on the heap, and are garbage collected (or reference counted) just like other objects.

    Lexpads are mostly a performance optimization. You could have separate runtime representations of each variable, but then you'd need an allocation for each variable in each function call you perform, which is generally much slower than a single allocation of the lexpad.

    The Plot Thickens

    To summarize, a variable has a name, a scope, and in languages that support it, a type. Those are properties known to the compiler, but not necessarily present at run time. At run time, a variable typically resolves to a stack offset in low-level languages, or to an index into a lexpad in dynamic languages.

    Even in languages that boldly claim that "everything is an object", a variable often isn't. The value inside a variable may be, but the variable itself typically not.

    Perl 6 Intricacies

    The things I've written above generalize pretty neatly to many programming languages. I am a Perl 6 developer, so I have some insight into how Perl 6 implements variables. If you don't resist, I'll share it with you :-).

    Variables in Perl 6 typically come with one more level of indirection, which we call a container. This allows two types of write operations: assignment stores a value inside a container (which again might be referenced by a variable), and binding places either a value or a container directly into a variable.

    Here's an example of assignment and binding in action:

    my $x;
    my $y;
    # assignment:
    $x = 42;
    $y = 'a string';
    
    say $x;     # => 42
    say $y;     # => a string
    
    # binding:
    $x := $y;
    
    # now $x and $y point to the same container, so that assigning to one
    # changes the other:
    $y = 21;
    say $x;     # => 21
    

    Why, I hear you cry?

    There are three major reasons.

    The first is that it makes assignment something that's not special. For example in python, if you assign to anything other than a plain variable, the compiler translates it to some special method call (obj.attr = x to setattr(obj, 'attr', x), obj[idx] = x to a __setitem__ call etc.). In Perl 6, if you want to implement something you can assign to, you simply return a container from that expression, and then assignment works naturally.

    For example an array is basically just a list in which the elements are containers. This makes @array[$index] = $value work without any special cases, and allows you to assign to the return value of methods, functions, or anything else you can think of, as long as the expression returns a container.

    The second reason for having both binding and assignment is that it makes it pretty easy to make things read-only. If you bind a non-container into a variable, you can't assign to it anymore:

    my $a := 42;
    $a = "hordor";  # => Cannot assign to an immutable value
    

    Perl 6 uses this mechanism to make function parameters read-only by default.

    Likewise, returning from a function or method by default strips the container, which avoids accidental action-at-a-distance (though an is rw annotation can prevent that, if you really want it).

    This automatic stripping of containers also makes expressions like $a + 2 work, independently of whether $a holds an integer directly, or a container that holds an integer. (In the implementation of Perl 6's core types, sometimes this has to be done manually. If you ever wondered what nqp::decont does in Rakudo's source code, that's what).
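
    Here is a small sketch of the stripping in action; the sub names are made up, and the exact error message may vary between Rakudo versions. The is rw trait mentioned above passes the container through instead:

    my $x = 42;
    sub plain()          { $x }
    sub writable() is rw { $x }

    writable() = 23;    # fine: the container of $x is returned
    say $x;             # 23
    plain() = 23;       # dies: cannot assign to an immutable value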

    The third reason relates to types.

    Perl 6 supports gradual typing, which means you can optionally annotate your variables (and other things) with types, and Perl 6 enforces them for you. It detects type errors at compile time where possible, and falls back to run-time checking types.

    The type of a variable only applies to binding, but the variable's default container inherits this type, and the container type is enforced at run time. You can observe this difference by binding a container with a different constraint:

    my Any $x;
    my Int $i;
    $x := $i;
    $x = "foo";     # => Type check failed in assignment to $i; expected Int but got Str ("foo")
    

    Int is a subtype of Any, which is why the binding of $i to $x succeeds. Now $x and $i share a container that is type-constrained to Int, so assigning a string to it fails.

    Did you notice how the error message mentions $i as the variable name, even though we've tried to assign to $x? The variable name in the error message is really a heuristic, which works often enough, but sometimes fails. The container that's shared between $x and $i has no idea which variable you used to access it, it just knows the name of the variable that created it, here $i.

    Binding checks the variable type, not the container type, so this code doesn't complain:

    my Any $x;
    my Int $i;
    $x := $i;
    $x := "a string";
    

    This distinction between variable type and container type might seem weird for scalar variables, but it really starts to make sense for arrays, hashes and other compound data structures that might want to enforce a type constraint on their elements:

    sub f($x) {
        $x[0] = 7;
    }
    my Str @s;
    f(@s);
    

    This code declares an array whose elements must all be of type Str (or subtypes thereof). When you pass it to a function, that function has no compile-time knowledge of the type. But since $x[0] returns a container with type constraint Str, assigning an integer to it produces the error you expect.

    Summary

    Variables typically only exists as objects at compile time. At run time, they are just some memory location, either on the stack or in a lexical pad.

    Perl 6 makes the understanding of the exact nature of variables a bit more involved by introducing a layer of containers between variables and values. This offers great flexibility when writing libraries that behave like built-in classes, but comes with the burden of additional complexity.

    Zoffix Znet: Tag Your Dists

    Published on 2017-03-10T00:00:00

    Tags support in Perl 6 modules ecosystem

    Perlgeek.de: Perl 6 By Example: A Unicode Search Tool

    Published by Moritz Lenz on 2017-03-04T23:00:01

    This blog post is part of my ongoing project to write a book about Perl 6.

    If you're interested, either in this book project or any other Perl 6 book news, please sign up for the mailing list at the bottom of the article, or here. It will be low volume (less than an email per month, on average).


    Every so often I have to identify or research some Unicode characters. There's a tool called uni in the Perl 5 distribution App::Uni.

    Let's reimplement its basic functionality in a few lines of Perl 6 code and use that as an occasion to talk about Unicode support in Perl 6.

    If you give it one character on the command line, it prints out a description of the character:

    $ uni 🕐
    🕐 - U+1f550 - CLOCK FACE ONE OCLOCK
    

    If you give it a longer string instead, it searches in the list of Unicode character names and prints out the same information for each character whose description matches the search string:

    $ uni third|head -n5
    ⅓ - U+02153 - VULGAR FRACTION ONE THIRD
    ⅔ - U+02154 - VULGAR FRACTION TWO THIRDS
    ↉ - U+02189 - VULGAR FRACTION ZERO THIRDS
    ㆛ - U+0319b - IDEOGRAPHIC ANNOTATION THIRD MARK
    𐄺 - U+1013a - AEGEAN WEIGHT THIRD SUBUNIT
    

    Each line corresponds to what Unicode calls a "code point", which is usually a character on its own, but occasionally also something like a U+00300 - COMBINING GRAVE ACCENT, which, combined with a a - U+00061 - LATIN SMALL LETTER A makes the character à.

    Perl 6 offers a method uniname in both the classes Str and Int that produces the Unicode code point name for a given character, either in its direct character form, or in the form of the code point number. With that, we can implement the first part of uni's desired functionality:

    #!/usr/bin/env perl6
    
    use v6;
    
    sub format-codepoint(Int $codepoint) {
        sprintf "%s - U+%05x - %s\n",
            $codepoint.chr,
            $codepoint,
            $codepoint.uniname;
    }
    
    multi sub MAIN(Str $x where .chars == 1) {
        print format-codepoint($x.ord);
    }
    

    Let's look at it in action:

    $ uni ø
    ø - U+000f8 - LATIN SMALL LETTER O WITH STROKE
    

    The chr method turns a code point number into the character and ord is the reverse, in other words: from character to code point number.
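
    A few quick conversions illustrate the pair (reusing characters from the examples above):

    say 0x1F550.chr;          # 🕐
    say '🕐'.ord.base(16);    # 1F550
    say 248.uniname;          # LATIN SMALL LETTER O WITH STROKE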

    The second part, searching in all Unicode character names, works by brute-force enumerating all possible characters and searching through their uniname:

    multi sub MAIN($search is copy) {
        $search.=uc;
        for 1..0x10FFFF -> $codepoint {
            if $codepoint.uniname.contains($search) {
                print format-codepoint($codepoint);
            }
        }
    }
    

    Since all character names are in upper case, the search term is first converted to upper case with $search.=uc, which is short for $search = $search.uc. By default, parameters are read-only, which is why the declaration here uses is copy to get a writable copy.

    Instead of this rather imperative style, we can also formulate it in a more functional style. We could think of it as a list of all characters, which we whittle down to those characters that interest us, to finally format them the way we want:

    multi sub MAIN($search is copy) {
        $search.=uc;
        print (1..0x10FFFF).grep(*.uniname.contains($search))
                           .map(&format-codepoint)
                           .join;
    }
    

    To make it easier to identify (rather than search for) a string of more than one character, an explicit option can help disambiguate:

    multi sub MAIN($x, Bool :$identify!) {
        print $x.ords.map(&format-codepoint).join;
    }
    

    Str.ords returns the list of code points that make up the string. With this multi candidate of sub MAIN in place, we can do something like

    $ uni --identify øre
    ø - U+000f8 - LATIN SMALL LETTER O WITH STROKE
    r - U+00072 - LATIN SMALL LETTER R
    e - U+00065 - LATIN SMALL LETTER E
    

    Code Points, Grapheme Clusters and Bytes

    As alluded to above, not all code points are fully-fledged characters on their own. Or put another way, some things that we visually identify as a single character are actually made up of several code points. Unicode calls such a sequence of one base character and potentially several combining characters a grapheme cluster.

    Strings in Perl 6 are based on these grapheme clusters. If you get a list of characters in a string with $str.comb, extract a substring with $str.substr(0, 4), match a regex against a string, determine the length, or do any other operation on a string, the unit is always the grapheme cluster. This best fits our intuitive understanding of what a character is, and avoids accidentally tearing apart a logical character through a substr, comb or similar operation:

    my $s = "ø\c[COMBINING TILDE]";
    say $s;         # ø̃
    say $s.chars;   # 1
    

    The Uni type is akin to a string and represents a sequence of codepoints. It is useful in edge cases, but doesn't support the same wealth of operations as Str. The typical way to go from Str to a Uni value is to use one of the NFC, NFD, NFKC, or NFKD methods, which yield a Uni value in the normalization form of the same name.

    Below the Uni level you can also represent strings as bytes by choosing an encoding. If you want to get from string to the byte level, call the encode method:

    my $bytes = 'Perl 6'.encode('UTF-8');
    

    UTF-8 is the default encoding and also the one Perl 6 assumes when reading source files. The result is something that does the Blob role; you can access individual bytes with positional indexing, such as $bytes[0]. The decode method helps you to convert a Blob to a Str.
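
    A short round trip through the byte level might look like this:

    my $bytes = 'Perl 6'.encode('UTF-8');
    say $bytes[0];        # 80, the byte value of 'P'
    say $bytes.decode;    # Perl 6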

    Numbers

    Number literals in Perl 6 aren't limited to the Arabic digits we are so used to in the English speaking part of the world. All Unicode code points that have the Decimal_Number (short Nd) property are allowed, so you can for example use Bengali digits:

    say ৪২;             # 42
    

    The same holds true for string to number conversions:

    say "৪২".Int;       # 42
    

    For other numeric code points you can use the unival method to obtain its numeric value:

    say "\c[TIBETAN DIGIT HALF ZERO]".unival;
    

    which produces the output -0.5 and also illustrates how to use a codepoint by name inside a string literal.

    Other Unicode Properties

    The uniprop method in type Str returns the general category by default:

    say "ø".uniprop;                            # Ll
    say "\c[TIBETAN DIGIT HALF ZERO]".uniprop;  # No
    

    The return value needs some Unicode knowledge in order to make sense of it, or one could read Unicode's Technical Report 44 for the gory details. Ll stands for Letter_Lowercase, No is Other_Number. This is what Unicode calls the General Category, but you can ask the uniprop method (or uniprop-bool, if you're only interested in a boolean result) for other properties as well:

    say "a".uniprop-bool('ASCII_Hex_Digit');    # True
    say "ü".uniprop-bool('Numeric_Type');       # False
    say ".".uniprop("Word_Break");              # MidNumLet
    

    Collation

    Sorting strings starts to become complicated when you're not limited to ASCII characters. Perl 6's sort method uses the cmp infix operator, which does a pretty standard lexicographic comparison based on the codepoint number.

    If you need to use a more sophisticated collation algorithm, Rakudo 2017.02 and newer offer the Unicode Collation Algorithm as an experimental feature:

    my @list = <a ö ä Ä o ø>;
    say @list.sort;                     # (a o Ä ä ö ø)
    
    use experimental :collation;
    say @list.collate;                  # (a ä Ä o ö ø)
    $*COLLATION.set(:tertiary(False));
    say @list.collate;                  # (a Ä ä o ö ø)
    

    The default sort considers any character with diacritics to be larger than ASCII characters, because that's how they appear in the code point list. On the other hand, collate knows that characters with diacritics belong directly after their base character, which is not perfect in every language, but generally a good compromise.

    For Latin-based scripts, the primary sorting criterion is alphabetic, the secondary is diacritics, and the tertiary is case. $*COLLATION.set(:tertiary(False)) thus makes .collate ignore case, so it doesn't force lower case characters to come before upper case characters anymore.

    At the time of writing, language specification of collation is not yet implemented.

    Summary

    Perl 6 takes languages other than English very seriously, and goes to great lengths to facilitate working with them and the characters they use.

    This includes basing strings on grapheme clusters rather than code points, support for non-Arabic digits in numbers, and access to large parts of Unicode database through built-in methods.

    Subscribe to the Perl 6 book mailing list


    rakudo.org: Advance Notice of Significant Changes

    Published by Zoffix Znet on 2017-02-26T15:59:30

    Advance Notice of Significant Changes

    As part of the IO grant run by The Perl Foundation, we’re improving our IO-related methods and subroutines. We’ve identified several changes that will have a moderate impact on users and may require you to update your code.

    The exact changes to be made will be known by March 18th, 2017, and the
    implementation will be part of the 2017.04 Rakudo compiler release on April
    15th, which will be followed by the 2017.04 Rakudo Star Perl 6 release.

    Details

    Why Are We Changing Stuff?

    Our contract with the users is that we don’t change anything that’s covered
    by the Perl 6.c language version tests. This means most of the language
    remains reliably stable, but IO features got short-changed on the love. The
    tests for them are sparse—a big reason why the IO grant is running in the
    first place—which gives core developers a lot of freedom to change them
    and to improve them.

    Despite that freedom, we realize broken code isn’t a nice thing, and we will
    attempt to reduce the impact of the changes by providing a backwards compatible
    interface to support the old API, where feasible. Where that isn’t feasible, we
    will provide information about the upcoming changes; this notice is a part of that effort.

    What’s Changing?

    The grant covers all IO routines and methods (excluding sockets). All of the
    final changes are yet to be deliberated and ratified, and we’ll share the
    details once they’re known.

    Currently, it is speculated that the link() routine will change the order
    of its arguments (no backwards compatible support will be provided) and the seek()
    routine will take the seek reference as a named argument instead of an enum value
    (backwards compatible support will be provided).

    It’s very likely many more changes will be made. We’ll be using the code of
    all the modules in the ecosystem to judge the potential impact of the change
    and evaluate each change on a case-by-case basis.

    Timeline

    Help and More Info

    If you need help or more information, please join our IRC channel and ask there. You can also contact the person performing this work via Twitter @zoffix (Zoffix on IRC in #perl6-dev)

    Pawel bbkr Pabian: Your own template engine in 4 flavors. With Benchmarks!

    Published by Pawel bbkr Pabian on 2017-02-25T23:43:36

    This time on the blog I'll show you how to write your own template engine - with syntax and behavior tailored to your needs. And we'll do it in four different ways, to analyze the pros and cons of each approach, as well as code speed and complexity. Our sample task for today is to compose a password reminder text for a user, which can then be sent by email.

    use v6;

    my $template = q{
    Hi [VARIABLE person]!

    You can change your password by visiting [VARIABLE link] .

    Best regards.
    };

    my %fields = (
        'person' => 'John',
        'link'   => 'http://example.com'
    );

    So we've decided how our template syntax should look, and for starters we'll do trivial variables (although that's not a very precise name, because variables in templates are almost always immutable).
    We also have data to populate the template fields. Let's get started!

    1. Substitutions

    sub substitutions ( $template is copy, %fields ) {
        for %fields.kv -> $key, $value {
            $template ~~ s:g/'[VARIABLE ' $key ']'/$value/;
        }
        return $template;
    }
    
    

    say substitutions($template, %fields);

    Yay, works:

    Hi John!

    You can change your password by visiting http://example.com .

    Best regards.

    Now it is time to benchmark it to get some baseline for different approaches:

    use Bench;

    my $template_short = $template;
    my %fields_short = %fields;

    my $template_long = join(
        ' lorem ipsum ', map( { '[VARIABLE ' ~ $_ ~ ']' }, 'a' .. 'z')
    ) x 100;
    my %fields_long = ( 'a' .. 'z' ) Z=> ( 'lorem ipsum' xx * );

    my $b = Bench.new;
    $b.timethese(
        1000,
        {
            'substitutions_short' => sub {
                substitutions( $template_short, %fields_short )
            },
            'substitutions_long' => sub {
                substitutions( $template_long, %fields_long )
            },
        }
    );

    Benchmarks in this post test two cases for each approach. Our template from the example is the "short" case. And there is a "long" case: a 62KB template containing 2599 text fragments and 2600 variables filled from 26 fields. Here are the results:

    Timing 1000 iterations of substitutions_long, substitutions_short...
    substitutions_long: 221.1147 wallclock secs @ 4.5225/s (n=1000)
    substitutions_short: 0.1962 wallclock secs @ 5097.3042/s (n=1000)
    

    Whoa! That is a serious penalty for long templates. The reason is that this code has three serious flaws: the original template is destroyed during variable evaluation (and therefore must be copied each time we want to reuse it), the template text is parsed multiple times, and the output is rewritten after each variable is populated. But we can do better...

    2. Substitution

    sub substitution ( $template is copy, %fields ) {
        $template ~~ s:g/'[VARIABLE ' (\w+) ']'/{ %fields{$0} }/;
        return $template;
    }
    

    This time we have a single substitution. The variable name is captured, and we can use it to look up the field value on the fly. Benchmarks:

    Timing 1000 iterations of substitution_long, substitution_short...
    substitution_long: 71.6882 wallclock secs @ 13.9493/s (n=1000)
    substitution_short: 0.1359 wallclock secs @ 7356.3411/s (n=1000)
    

    A mediocre boost. We have less of a penalty on long templates because the text is not parsed multiple times. However, the remaining flaws from the previous approach still apply, and the regex engine must still do plenty of memory reallocations for each piece of template text replaced.

    Also, it won't allow our template engine to gain new features in the future - like conditions or loops - because it is very hard to parse nested tags with a single regex. Time for a completely different approach...

    3. Grammars and direct Actions

    If you are not familiar with Perl 6 grammars and Abstract Syntax Tree concept you should study official documentation first.

    grammar Grammar {
        regex TOP { ^ [ <chunk=text> | <chunk=variable> ]* $ }
        regex text { <-[ \[ \] ]>+ }
        regex variable { '[VARIABLE ' $<name>=(\w+) ']' }
    }
    
    

    class Actions {

        has %.fields is required;

        method TOP ( $/ ) {
            make [~]( map { .made }, $/{'chunk'} );
        }
        method text ( $/ ) {
            make ~$/;
        }
        method variable ( $/ ) {
            make %.fields{$/{'name'}};
        }

    }

    sub grammar_actions_direct ( $template, %fields ) {
        my $actions = Actions.new( fields => %fields );
        return Grammar.parse($template, :$actions).made;
    }

    The most important thing is defining our template syntax as a grammar. A grammar is just a set of named regular expressions that can call each other. At the "TOP" (where parsing starts) we see that our template is composed of chunks. Each chunk can be text or a variable. The regex for text matches everything until it hits the start of a variable (the '[' character; let's assume it is forbidden in text to keep things simple). The regex for a variable should look familiar from the previous approaches, however now we capture the variable name in a named way instead of positionally.

    An action class has methods that are called whenever the regex with the corresponding name is matched. When called, a method gets the match object ($/) from this regex and can "make" something from it. This "made" something will be seen by the upper level method when it is called. For example our "TOP" regex calls the "text" regex, which matches the "Hi " part of the template and calls the "text" method. This "text" method just "make"s this matched string for later use. Then the "TOP" regex calls the "variable" regex, which matches the "[VARIABLE name]" part of the template. Then the "variable" method is called, and it checks the match object for the variable name and "makes" the value of this variable from the %fields hash for later use. This continues until the end of the template string. Then the "TOP" regex is matched and the "TOP" method is called. This "TOP" method can access the array of text or variable "chunks" in the match object and see what was "made" for those chunks earlier. So all it has to do is to "make" those values concatenated together. And finally we get this "made" template from the "parse" method. So let's look at benchmarks:

    Timing 1000 iterations of grammar_actions_direct_long, grammar_actions_direct_short...
    grammar_actions_direct_long: 149.5412 wallclock secs @ 6.6871/s (n=1000)
    grammar_actions_direct_short: 0.2405 wallclock secs @ 4158.1981/s (n=1000)
    

    We got rid of two more flaws from the previous approaches. The original template is not destroyed when fields are filled, which means less memory copying. There is also no reallocation of memory during substitution of each field, because now every action method just "make"s strings to be joined later. And we can easily extend our template syntax by adding loops, conditions and more features, just by throwing some regexes into the grammar and defining the corresponding behavior in the actions. Unfortunately we see some performance regression, and this happens because every time the template is processed it is parsed, match objects are created, a parse tree is built, and it has to track all those "make"/"made" values when it is collapsed to the final output. But that was not our final word...

    4. Grammars and closure Actions

    Finally we reached the "boss level", where we have to exterminate the last and greatest flaw - re-parsing.
    The idea is to use grammars and actions like in the previous approach, but this time, instead of getting direct output, we want to generate executable and reusable code that works like this under the hood:

    sub ( %fields ) {
        return join '',
            sub ( %fields ) { return "Hi "}.( %fields ),
            sub ( %fields ) { return %fields{'person'} }.( %fields ),
            ...
    }
    

    That's right, we will be converting our template body to a cascade of subroutines.
    Each time this cascade is called it will get %fields and propagate it to the deeper subroutines.
    Each subroutine is responsible for handling the piece of template matched by a single regex in the grammar. We can reuse the grammar from the previous approach and modify only the actions:

    class Actions {
        
        method TOP ( $/ ) {
            my @chunks = $/{'chunk'};
            make sub ( %fields ) {
               return [~]( map { .made.( %fields ) }, @chunks );
            };
        }
        method text ( $/ ) {
            my $text = ~$/;
            make sub ( %fields ) {
                return $text;
            };
        }
        method variable ( $/ ) {
            my $name = $/{'name'};
            make sub ( %fields  ) {
                return %fields{$name}
            };
        }
        
    }
    
    

    sub grammar_actions_closures ( $template, %fields ) {
        state %cache{Str};
        my $closure = %cache{$template} //= Grammar.parse(
            $template, actions => Actions.new
        ).made;
        return $closure( %fields );
    }

    Now every action method, instead of making the final output, makes a subroutine that will get %fields and produce the final output later. To generate this cascade of subroutines, the template must be parsed only once. Once we have it, we can call it with different sets of %fields to populate the variables in our template. Note how the object hash %cache is used to determine if we already have a subroutine tree for the given $template. Enough talking, let's crunch some numbers:

    Timing 1000 iterations of grammar_actions_closures_long, grammar_actions_closures_short...
    grammar_actions_closures_long: 22.0476 wallclock secs @ 45.3563/s (n=1000)
    grammar_actions_closures_short: 0.0439 wallclock secs @ 22778.8885/s (n=1000)
    

    Nice result! We have an extensible template engine that is 4 times faster for short templates and 10 times faster for long templates than our initial approach. And yes, there is a bonus level...

    4.1. Grammars and closure Actions in parallel

    The last approach opened a new optimization possibility. If we have subroutines that generate our template, why not run them in parallel? So let's modify our action "TOP" method to process text and variable chunks simultaneously:

    method TOP ( $/ ) {
        my @chunks = $/{'chunk'};
        make sub ( %fields ) {
           return [~]( @chunks.hyper.map( {.made.( %fields ) } ).list );
        };
    }
    

    Such an optimization shines when your template engine must do some lengthy operations to generate a chunk of the final output, for example execute a heavy database query or call some API. It is perfectly fine to ask for data on the fly to populate the template, because in a feature-rich template engine you may not be able to predict and generate the complete set of data needed beforehand, like we did with our %fields. Use this optimization wisely: for fast subroutines you will see a performance drop, because the cost of sending chunks to threads and retrieving them is higher than just executing them serially on a single core.

    Which approach should I use to implement my own template engine?

    That depends on how much you can reuse templates. For example, if you send one password reminder per day, go for simple substitution, and reach for a grammar with direct actions if you need more complex features. But if you are using templates, for example in PSGI processes to display hundreds of pages per second for different users, then the grammar and closure actions approach wins hands down.

    You can download all approaches with benchmarks in a single file here.

    To be continued?

    If you liked this brief introduction to template engines and want to see more complex features like conditions or loops implemented, leave a comment under this article on blogs.perl.org or send me a private message on the irc.freenode.net #perl6 channel (nick: bbkr).

    brrt to the future: Register Allocator Update

    Published by Bart Wiegmans on 2017-02-09T16:19:00

    Hi everybody, I thought some of you might be interested in an update regarding the JIT register allocator, which is, after all, the last missing piece for the new 'expression' JIT backend. Well, the last complicated piece, at least. Because register allocation is such a broad topic, I don't expect to cover all topics relevant to design decisions here, and reserve a future post for that purpose.

    I think I may have mentioned earlier that I've chosen to implement linear scan register allocation, an algorithm first described in 1999. Linear scan is relatively popular for JIT compilers because it achieves reasonably good allocation results while being considerably simpler and faster than the alternatives, most notably via graph coloring (unfortunately no open access link available). Because optimal register allocation is NP-complete, all realistic algorithms are heuristic, and linear scan applies a simple heuristic to good effect. I'm afraid fully explaining the nature of that heuristic and the tradeoffs involved is beyond the scope of this post, so you'll have to remind me to do it at a later point.

    Commit ab077741 made the new allocator the default after I had ironed out sufficient bugs to be feature-equivalent to the old allocator (which still exists, although I plan to remove it soon).
    Commit 0e66a23d introduced support for 'PHI' node merging, which is really important and exciting to me, so I'll have to explain what it means. The expression JIT represents code in a form in which all values are immutable, called single static assignment form, or SSA form for short. This helps simplify compilation because there is a clear correspondence between operations and the values they compute. In general in compilers, the easier it is to assert something about code, the more interesting things you can do with it, and the better code you can compile. However, in real code, variables are often assigned more than one value. A PHI node is basically an 'escape hatch' to let you express things like:

    int x, y;
    if (some_condition()) {
        x = 5;
    } else {
        x = 10;
    }
    y = x - 3;

    In this case, despite our best intentions, x can have two different values. In SSA form, this is resolved as follows:

    int x1, x2, x3, y;
    if (some_condition()) {
        x1 = 5;
    } else {
        x2 = 10;
    }
    x3 = PHI(x1, x2);
    y = x3 - 3;

    The meaning of the PHI node is that it 'joins together' the values of x1 and x2 (somewhat like a junction in perl6), and represents the value of whichever 'version' of x was ultimately defined. Resolving PHI nodes means ensuring that, as far as the register allocator is concerned, x1, x2, and x3 should preferably be allocated to the same register (or memory location), and if that's not possible, it should copy x1 and x2 to x3 for correctness. To find the set of values that are 'connected' via PHI nodes, I apply a union-find data structure, which is a very useful data structure in general. Much to my amazement, that code worked the first time I tried it.
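
    For readers who haven't met it before, union-find fits in a dozen lines. Here is a generic Perl 6 sketch of the idea (illustrative only; the JIT's version is of course C):

    my @parent = ^10;   # every value starts as its own set

    sub find($x) {
        # walk to the root, compressing the path along the way
        @parent[$x] = find(@parent[$x]) if @parent[$x] != $x;
        @parent[$x];
    }

    sub union($a, $b) {
        @parent[find($a)] = find($b);   # link the two roots
    }

    union(1, 2); union(2, 3);           # "PHI(x1, x2)" style merges
    say find(1) == find(3);             # True: one connected set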

    Then I had to fix a very interesting bug in commit 36f1fe94 which involves ordering between 'synthetic' and 'natural' tiles. (Tiles are the output of the tiling process, about which I've written at some length; they represent individual instructions.) Within the register allocator, I've chosen to identify tiles / instructions by their index in the code list, and to store tiles in a contiguous array. There are many advantages to this strategy but they are also beyond the scope of this post. One particular advantage, though, is that the indexes into this array make their relative order immediately apparent. This is relevant to linear scan because it relies on relative order to determine when to allocate a register and when a value is no longer necessary.

    However, because of using this index, it's not so easy to squeeze new tiles into that array - which is exactly what a register allocator does when it decides to 'spill' a value to memory and load it when needed. (Because inserts are delayed and merged into the array in a single step, the cost of insertion is constant.) Without proper ordering, a value loaded from memory could overwrite another value that is still in use. The fix for that is, I think, surprisingly simple and elegant. In order to 'make space' for the synthetic tiles, before comparison all indexes are multiplied by a factor of 2, and synthetic tiles are further offset by -1 or +1, depending on whether they should be handled before or after the 'natural' tile they are inserted for. E.g. synthetic tiles that load a value should be processed before the tile that uses the value they load.
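
    In other words, the effective comparison key could be computed roughly like this (a schematic Perl 6 rendering of the idea, not the actual C code):

    # natural tiles sit at even keys; synthetic tiles slot in just
    # before (-1) or just after (+1) the tile they were inserted for
    sub order-key(Int $index, Bool :$synthetic = False, Bool :$before = False) {
        my $key = $index * 2;
        $key += $before ?? -1 !! +1 if $synthetic;
        $key;
    }

    say order-key(3);                        # 6: the natural tile itself
    say order-key(3, :synthetic, :before);   # 5: e.g. a load, processed first
    say order-key(3, :synthetic);            # 7: e.g. a store, processed after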

    Another issue soon appeared, this time having to do with x86 being, altogether, quaint and antiquated and annoying, and specifically with the use of one operand register as source and result value. To put it simply, where you and I and the expression JIT structure might say:

    a = b + c

    x86 says:

    a = a + b

    Resolving the difference is tricky, especially for linear scan, since linear scan processes the values in the program rather than the instructions that generate them. It is therefore not suited to deal with instruction-level constraints such as these. If a, b, and c in my example above are not the same (not aliases), then this can be achieved by a copy:

    a = b
    a = a + c

    If a and b are aliases, the first copy isn't necessary. However, if a and c are aliases, then a copy may or may not be necessary, depending on whether the operation (in this case '+') is commutative, i.e. it holds for '+' but not for '-'. Commit 349b360 attempts to fix that for 'direct' binary operations, but a fix for indirect operations is still work in progress. Unfortunately, it meant I had to reserve a register for temporary use to resolve this, meaning there are fewer available for the register allocator to use. Fortunately, that did simplify handling of a few irregular instructions, e.g. signed cast of 32 bit integers to 64 bit integers.
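
    Put as pseudocode, the decision procedure looks roughly like this (schematic Perl 6 standing in for the JIT's C logic; 'tmp' is the reserved scratch register mentioned above):

    sub lower-binop(Str $op, Str $a, Str $b, Str $c) {
        # x86: the destination doubles as the first source operand
        if $a eq $b {
            return "$op $a, $c";                       # a = a op c, no copy
        }
        if $a eq $c {
            return "$op $a, $b" if $op eq '+';         # commutative: swap operands
            # non-commutative and a aliases c: go through the scratch register
            return "mov tmp, $b; $op tmp, $c; mov $a, tmp";
        }
        "mov $a, $b; $op $a, $c";                      # general case: copy, then operate
    }

    say lower-binop('+', 'r1', 'r2', 'r1');   # + r1, r2 (operands swapped)
    say lower-binop('-', 'r1', 'r2', 'r1');   # copies through the scratch register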

    So that brings us to today and my future plans. The next thing to implement will be support for function calls by the register allocator, which involves shuffling values to the right registers and correct positions on the stack, and also spilling all values that are still required after the function call, since the function may overwrite them. This requires a bit of refactoring of the logic that spills variables, since currently it is only used when there are not enough registers available. I also need to change the linear scan main loop, because it processes values in order of first definition, and as such, instructions that don't create any values are skipped, even if they need special handling, like function calls. I'm thinking of solving that with a special 'interesting tiles' queue that is processed alongside the main values working queue.

    That was it for today. I hope to write soon with more progress.

    Strangely Consistent: Deep Git

    Published by Carl Mäsak

    I am not good at chess.

    I mean... "I know how the pieces move". (That's the usual phrase, isn't it?) I've even tried to understand chess better at various points in my youth, trying to improve my swing. I could probably beat some of you other self-identified "I know how the pieces move" folks out there. With a bit of luck. As long as you don't, like, cheat by having a strategy or something.

    I guess what I'm getting at here is that I am not, secretly, an international chess master. OK, now that's off my chest. Phew!

    Imagining what it's like to be really good at chess is very interesting, though. I can say with some confidence that a chess master never stops and asks herself "wait — how does the knight piece move, again?" Not even I do that! Obviously, the knight piece is the one that moves √5 distances on the board. 哈哈

    I can even get a sense of what terms a master-level player uses internally, by reading what master players wrote. They focus on tactics and strategy. Attacks and defenses. Material and piece values. Sacrifices and piece exchange. Space and control. Exploiting weaknesses. Initiative. Openings and endgames.

    Such high-level concerns leave the basic mechanics of piece movements far behind. Sure, those movements are in there somewhere. They are not irrelevant, of course. They're just taken for granted and no longer interesting in themselves. Meanwhile, the list of master-player concerns above could almost equally well apply to a professional Go player. (s:g/piece/stone/ for Go.)

    Master-level players have stopped looking at individual trees, and are now focusing on the forest.

    The company that employs me (Edument) has a new slogan. We've put it on the backs of sweaters which we then wear to events and conferences:

    We teach what you can't google.

    I really like this new slogan. Particularly, it feels like something we as a teaching company have already trended towards for a while. Some things are easy to find just by googling them, or finding a good cheat sheet. But that's not why you attend a course. We should position ourselves so as to teach answers to the deep, tricky things that only emerge after using something for a while.

    You're starting to see how this post comes together now, aren't you? 😄

    2017 will be my ninth year with Git. I know it quite well by now, having learned it in depth and breadth along the way. I can safely say that I'm better at Git than I am at chess at this point.

    Um. I'm most certainly not an international Git grandmaster — but largely that's because such a title does not exist. (If someone reads this post and goes on to start an international Git tournament, I will be very happy. I might sign up.)

    No, my point is that the basic commands have taken on the role for me that I think basic piece movements have taken on for chess grandmasters. They don't really matter much; they're a means to an end, and it's the end that I'm focusing on when I type them.

    (Yes, I still type them. There are some pretty decent GUIs out there, but none of them give me the control of the command line. Sorry-not-sorry.)

    Under this analogy, what are the things I value with Git, if not the commands? What are the higher-level abstractions that I tend to think in terms of nowadays?

    Atomicity. Consistency. Isolation. Durability. (Yes, these are the ACID guarantees for database transactions, but made to work for Git instead.)

    A colleague of mine talks a lot about "definition of done". It seems to be a Scrum thing. It's his favorite term more than mine, but I still like it for its attempt at "mechanizing" quality, which I believe can succeed in a large number of situations.

    Another colleague of mine likes the Boy Scout Rule of "Always leave the campground cleaner than you found it". If you think of this in terms of code, it means something like refactoring a code base as you go, cleaning it up bit by bit and asymptotically approaching code perfection. But if you think of it in terms of process, it dovetails quite nicely with the "definition of done" above.

    Instead of explaining how in the abstract, let's go through a concrete-enough example:

    1. Some regression is discovered. (Usually by some developer dogfooding the system.)
    2. If it's not immediately clear, we bisect and find the offending commit.
    3. ASAP, we revert that commit.
    4. We analyze the problematic part of the reverted commit until we understand it thoroughly. Typically, the root cause will be something that was not in our definition of done, but should've been.
    5. We write up a new commit/branch with the original (good) functionality restored, but without the discovered problem.
    6. (Possibly much later.) We attempt to add discovery of the problem to our growing set of static checks. The way we remember to do that is through a TODO list in a wiki. This list keeps growing and shrinking in fits and starts.

    Note in particular the interplay between process, quality and, yes, Git. Someone could've told me at the end of step 6 that I had totalled 29 or so basic Git commands along the way, and I would've believed them. But that's not what matters to us as a team. If we could do with magic pixie dust what we do with Git — keep historic snapshots of the code while ensuring quality and isolation — we might be satisfied magic pixie dust users instead.

    Somewhere along the way, I also got a much more laid-back approach to conflicts. (And I stopped saying "merge conflicts", because there are also conflicts during rebase, revert, cherry-pick, and stash — and they are basically the same deal.) A conflict happens when a patch P needs to be applied in an environment which differs too much from the one in which P was created.

    Aside: in response to this post, jast++ wrote this on #perl6: "one minor nitpick: git knows two different meanings for 'merge'. one is commit-level merge, one is file-level three-way merge. the latter is used in rebase, cherry-pick etc., too, so technically those conflicts can still be called merge conflicts. :)" — TIL.

    But we actually don't care so much about conflicts. Git cares about conflicts, because it can't just apply the patch automatically. What we care about is that the intent of the patch has survived. No software can check that for us. Since the (conflict ↔ no conflict) axis is independent from the (intent broken ↔ intent preserved) axis, we get four cases in total. Two of those are straightforward, because the (lack of) conflict corresponds to the (lack of) broken intent.

    The remaining two cases happen rarely but are still worth thinking about:

    If we care about quality, one lesson emerges from mst's example: always run the tests after you merge and after you've resolved conflicts. And another lesson from my example: try to introduce automatic checks for structures and relations in the code base that you care about. In this case, branch A could've put in a test or a linting step that failed as soon as it saw something according to the old naming convention.

    A lot of the focus on quality also has to do with doggedly going to the bottom of things. It's in the nature of failures and exceptional circumstances to clump together and happen at the same time. So you need to handle them one at a time, carefully unraveling one effect at a time, slowly disassembling the hex like a child's rod puzzle. Git sure helps with structuring and linearizing the mess that happens in everyday development, exploration, and debugging.

    As I write this, I realize even more how even when I try to describe how Git has faded into the background as something important-but-uninteresting for me, I can barely keep the other concepts out of focus. Quality being chief among them. In my opinion, the focus on improving not just the code but the process, of leaving the campground cleaner than we found it, those are the things that make it meaningful for me to work as a developer even decades later. The feeling that code is a kind of poetry that punches you back — but as it does so, we learn something valuable for next time.

    I still hear people say "We don't have time to write tests!" Well, in our teams, we don't have time not to write tests! Ditto with code review, linting, and writing descriptive commit messages.

    No-one but Piet Hein deserves the last word of this post:

    The road to wisdom? — Well, it's plain
    and simple to express:

    Err
    and err
    and err again
    but less
    and less
    and less.

    Death by Perl6: Hello Web! with Purée Perl 6

    Published by Tony O'Dell on 2017-01-09T18:19:56

    Let's build a website.

    Websites are easy to build. There are dozens of frameworks out there to use, perl has Mojo and Catalyst as its major frameworks and other languages also have quite a few decent options. Some of them come with boilerplate templates and you just go from there. Others don't and you spend your first few hours learning how to actually set up the framework and reading about how to share your DB connection between all of your controllers and blah, blah, blah. Let's look at one of P6's web frameworks.

    Enter Hiker

    Hiker doesn't introduce a lot of (if any) new ideas. It does use paradigms you're probably used to, and it aims to make setting up your website very straightforward and easy, so that you can get straight to work sharing your content with the English.

    The Framework

    Hiker is intended to make things fast and easy from the development side. Here's how it works. If you're not into the bleep blop and just want to get started, skip to the Boilerplate heading.

    Application Initialization

    1. Hiker reads from the subdirectories we'll look at later. The controllers and models are classes.
    2. Looking at all controllers, Hiker initializes a new object for each class, and then checks for its .path attribute
      1. If Hiker can't find the path attribute then it doesn't bind anything and produces a warning
    3. After setting up the controller routes, it instantiates a new object for the model as specified by the controller (.model)
      1. If none is given by the controller then nothing is instantiated or bound and nothing happens
      2. If a model is required by the controller but it cannot be found then Hiker refuses to bind
    4. Finally, HTTP::Server::Router is alerted to all of the paths that Hiker was able to find and verify

    The Request

    1. If the path is found, then the associated class' .model.bind is called.
      1. The response (second parameter of .model.bind($req, $res)) has a hash to store information: $res.data
    2. The controller's .handler($req, $res) is then executed
      1. The $res.data hash is available in this context
    3. If the handler returns a Promise then Hiker waits for that to be kept (and expects the result to be True or False)
      1. If the response is already rendered and the Promise's status is True then the router is alerted that no more routes should be explored
      2. If the response isn't rendered and the Promise's result is True, then .render is called automagically for you
      3. If the response isn't rendered and the Promise's result is False, then the next matching route is called

    Boilerplate

    Ensure you have Hiker installed:

    $ zef install Hiker
    $ rakudobrew rehash #this may be necessary to get the bin to work
    

    Create a new directory where you'd like to create your project's boilerplate and cd. From here we'll initialize some boilerplate and look at the content of the files.

    somedir$ hiker init  
    ==> Creating directory controllers
    ==> Creating directory models
    ==> Creating directory templates
    ==> Creating route MyApp::Route1: controllers/Route1.pm6
    ==> Creating route MyApp::Model1: models/Model1.pm6
    ==> Creating template templates/Route1.mustache
    ==> Creating app.pl6
    

    Neato burrito. From the output you can see that Hiker created some directories - controllers, models, templates - for us so we can start out organized. In those directories you will find a few files, let's take a look.

    The Model

    use Hiker::Model; 
    
    class MyApp::Model1 does Hiker::Model {  
      method bind($req, $res) {
        $res.data<who> = 'web!';
      }
    }  
    

    Pretty straightforward. MyApp::Model1 is instantiated during Hiker initialization and .bind is called whenever the controller's corresponding path is requested. As you can see here, this Model just adds to the $res.data hash the key value pair of who => 'web!'. This data will be available in the Controller as well as in the template files (if the controller decides to use one).

    The Controller

    use Hiker::Route; 
    
    class MyApp::Route1 does Hiker::Route {  
      has $.path     = '/';
      has $.template = 'Route1.mustache';
      has $.model    = 'MyApp::Model1';
    
      method handler($req, $res) {
        $res.headers<Content-Type> = 'text/plain';
      }
    }  
    

    As you can see above, the Hiker::Route packs a lot of information into a small space: it's a class that does a Hiker role called Hiker::Route. This lets our framework know that it should inspect the class for the path, template, and model so it can handle those operations for us - path and template are the only required attributes.

    As discussed above, our Route can return a Promise if there is some asynchronous operation to be performed. In this case all we're going to do is set the headers to indicate the Content-Type and then, automagically, render the template file. Note: if you return a falsey value from the handler method, the router will not auto render and will attempt to find the next route. This is so that you can cascade paths in the event that you want to chain them together, do some type of real-time decision making to determine whether that's the right class for the request, or perform some other unsaid dark magic. In the controller above we return a truthy value and it auto renders.
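
    For instance, a handler that does some asynchronous work before rendering might look something like this (a hypothetical route, not part of the boilerplate):

    use Hiker::Route;

    class MyApp::SlowRoute does Hiker::Route {
      has $.path     = '/slow';
      has $.template = 'Route1.mustache';

      method handler($req, $res) {
        start {
          # pretend this is a heavy database query or API call
          sleep 1;
          $res.data<who> = 'web, eventually!';
          True;  # kept with True: Hiker auto renders for us
        }
      }
    }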

    By specifying the Model in the Route, you're able to re-use the same Model class across multiple routes.

    The Path

    Quick notes about .path. You can pass a static path ('/staticpath'), a path with a placeholder ('/api/:placeholder'), or, if your path is a little more complicated, a regex (/ .* /). Check out the documentation for HTTP::Server::Router (repo).
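
    So, hypothetically, any of these would do (illustrative values only):

    has $.path = '/staticpath';        # exact match
    has $.path = '/api/:placeholder';  # named placeholder
    has $.path = / '/file/' .* /;      # full regex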

    The Template

    The template is specified by the controller's .template attribute and Hiker checks for that file in the ./templates folder. The default template engine is Template::Mustache (repo). See that module's documentation for more info.

    Running the App

    Really pretty straightforward from the boilerplate:

    somedir$ perl6 app.pl6  
    

    Now you can visit http://127.0.0.1:8080/ in your favorite Internet Explorer and find a nice 'Hello web!' message waiting to greet you. If you visit any other URI you'll receive the default 'no route found' message from HTTP::Server::Router.

    The Rest

    The module is relatively young. With feedback from the community, practical applications, and some extra feature expansion, Hiker could be pretty great and it's a good start to taking the tediousness out of building a website in P6. I'm open to feedback and I'd love to hear/see where you think Hiker can be improved, what it's missing to be productive, and possibly anything else [constructive or otherwise] you'd like to see in a practical, rapid development P6 web server.

    Steve Mynott: Rakudo Star: Past Present and Future

    Published by Steve Mynott on 2017-01-02T14:07:31

    At YAPC::EU 2010 in Pisa I received a business card with "Rakudo Star" and the
    date July 29, 2010 which was the date of the first release -- a week earlier
    with a countdown to 1200 UTC. I still have mine, although it has a tea stain
    on it and I refreshed my memory over the holidays by listening again to Patrick
    Michaud speaking about the launch of Rakudo Star (R*):

    https://www.youtube.com/watch?v=MVb6m345J-Q

    R* was originally intended as the first of a number of distribution releases (as
    opposed to compiler releases) -- useable for early adopters but not initially production
    quality. Other names had been considered at the time like Rakudo Beta (rejected as
    sounding like "don't use this"!) and amusingly Rakudo Adventure Edition.
    Finally it became Rakudo Whatever and Rakudo Star (since * means "whatever"!).

    Well over 6 years later and we never did come up with a better name although there
    was at least one IRC conversation about it and perhaps "Rakudo Star" is too
    well established as a brand at this point anyway. R* is the Rakudo compiler, the main docs, a module installer, some modules and some further docs.

    However, one radical change is happening soon and that is a move from panda to
    zef as the module installer. Panda has served us well for many years but zef is
    both more featureful and more actively maintained. Zef can also install Perl
    6 modules off CPAN although the CPAN-side support is in its early days. There
    is a zef branch (pull requests welcome!) and a tarball at:

    http://pl6anet.org/drop/rakudo-star-2016.12.zef-beta2.tar.gz

    Panda has been patched to warn that it will be removed and to advise the use of
    zef. Of course anyone who really wants to use panda can reinstall it using zef
    anyway.

    The modules inside R* haven't changed much in a while. I am considering adding
    DateTime::Format (shown by ecosystem stats to be widely used) and
    HTTP::UserAgent (probably the best pure perl6 web client library right now).
    Maybe some modules should also be removed (although this tends to be more
    controversial!). I am also wondering about OpenSSL support (if the library is
    available).

    p6doc needs some more love as a command line utility since most of the focus
    has been on the website docs and in fact some of these changes have impacted
    adversely on command line use, eg. under Windows cmd.exe "perl 6" is no longer
    correctly displayed by p6doc. I wonder if the website generation code should be
    decoupled from the pure docs and p6doc command line (since R* has to ship any
    new modules used by the website). p6doc also needs a better and faster search
    (using sqlite?). R* also ships some tutorial docs including a PDF generated from perl6intro.com.
    We only ship the English one and localisation to other languages could be
    useful.

    Currently R* is released roughly every three months (unless significant
    breakage leads to a bug fix release). Problems tend to happen with the
    less widely used systems (Windows and the various BSDs) and also with the
    module installers and some modules. R* is useful in spotting these issues
    missed by roast. Rakudo itself is still in rapid development. At some point a less frequently
    updated distribution (Star LTS or MTS?) will be needed for Linux distribution
    packagers and those using R* in production. There are also some question
    marks over support for different language versions (6.c and 6.d).

    Above all what R* (and Rakudo Perl 6 in general) needs is more people spending
    more time working on it! JDFI! Hopefully this blog post might
    encourage more people to get involved with github pull requests.

    https://github.com/rakudo/star

    Feedback, too, in the comments below is actively encouraged.


    Perl 6 Advent Calendar: Day 24 – Make It Snow

    Published by ab5tract on 2016-12-24T13:14:02

    Hello again, fellow sixers! Today I’d like to take the opportunity to highlight a little module of mine that has grown up in some significant ways this year. It’s called Terminal::Print and I’m suspecting you might already have a hint of what it can do just from the name. I’ve learned a lot from writing this module and I hope to share a few of my takeaways.

    Concurrency is hard

    Earlier in the year I decided to finally try to tackle multi-threading in Terminal::Print and… succeeded more or less, but rather miserably. I wrapped the access to the underlying grid (a two-dimensional array of Cell objects) in a react block and had change-cell and print-cell emit their respective actions on a Supply. The react block then handled these actions. Rather slowly, unfortunately.

    Yet, there was hope. After jnthn++ fixed a constructor bug in OO::Monitors I was able to remove all the crufty hand-crafted handling code and instead ensure that method calls to the Terminal::Print::Grid object would only run in a single thread at any given time. (This is the class which holds the two-dimensional array mentioned before and was likewise the site of my react block experiment).

    Here below are the necessary changes:

    - unit class Terminal::Print::Grid;
    + use OO::Monitors;
    + unit monitor Terminal::Print::Grid;
    

    This shift not only improved the readability and robustness of the code, it was significantly faster! Win! To me this is really an amazing dynamic of Perl 6. jnthn’s brilliant, twisted mind can write a meta-programming module that makes it dead simple for me to add concurrency guarantees at a specific layer of my library. My library in turn makes it dead simple to print from multiple threads at once on the screen! It’s whipuptitude enhancers all the way down!

    That said, our example today will not be writing from multiple threads. For some example code that utilizes async, I point you to examples/async.p6 and examples/matrix-ish.p6.

    Widget Hero

    Terminal::Print is really my first open source library in the sense that it is the first time that I have started my own module from scratch with the specific goal of filling a gap in a given language’s ecosystem. It is also the first time that I am not the sole contributor! I would be remiss not to mention japhb++ in this post, who has contributed a great deal in a relatively short amount of time.

    In addition to all the performance related work and the introduction of a span-oriented printing mechanism, japhb’s work on widgets especially deserves its own post! For now let’s just say that it has been a real pleasure to see the codebase grow and extend even as I have been too distracted to do much good. My takeaway here is a new personal milestone in my participation in libre/open software (my first core contributor!) that reinforces all the positive dynamics it can have on a code base.

    Oh, and I’ll just leave this here as a teaser of what the widget work has in store for us:

    rpg-ui-p6

    You can check it out in real-time and read the code at examples/rpg-ui.p6.

    Snow on the Golf Course

    Now you are probably wondering: where is the darn snow?! Well, here we go! The full code with syntax highlighting is available in examples/snowfall.p6. I will be going through it step by step below.

    use Terminal::Print;
    
    class Coord {
        has Int $.x is rw where * <= T.columns = 0;
        has Int $.y is rw where * <= T.rows = 0;
    }
    

    Here we import Terminal::Print. The library takes the position that when you import it somewhere, you are planning to print to the screen. To this end we export an instantiated Terminal::Print object into the importer’s lexical scope as T. This allows me to immediately start clarifying the x and y boundaries of our coordinate system based on run-time values derived from the current terminal window.

    class Snowflake {
        has $.flake = ('❆','❅','❄').roll;
        has $.pos = Coord.new;
    }
    
    sub create-flake {
        state @cols = ^T.columns .pick(*); # shuffled
        if +@cols > 0 {
            my $rand-x = @cols.pop;
            my $start-pos = Coord.new: x => $rand-x;
            return Snowflake.new: pos => $start-pos;
        } else {
            @cols = ^T.columns .pick(*);
            return create-flake;
        }
    }
    

    Here we create an extremely simple Snowflake class. What is nice here is that we can leverage the default value of the $.flake attribute to always be random at construction time.

    Then in create-flake we are composing a way to make sure we have hit every x coordinate as a starting point for the snowfall. Whenever create-flake gets called, we pop a random x coordinate out of the @cols state variable. The state variable enables this cool approach because we can manually fill @cols with a new randomized set of our available x coordinates once it is depleted.

    draw( -> $promise {
    
    start {
        my @flakes = create-flake() xx T.columns;
        my @falling;
        
        Promise.at(now + 33).then: { $promise.keep };
        loop {
            # how fast is the snowfall?
            sleep 0.1; 
        
            if (+@flakes) {
                # how heavy is the snowfall?
                my $limit = @flakes > 2 ?? 2            
                                        !! +@flakes;
                # can include 0, but then *cannot* exclude $limit!
                @falling.push: |(@flakes.pop xx (0..$limit).roll);  
            } else {
                @flakes = create-flake() xx T.columns;
            }
        
            for @falling.kv -> $idx, $flake {
                with $flake.pos.y -> $y {
                    if $y > 0 {
                        T.print-cell: $flake.pos.x, ($flake.pos.y - 1), ' ';
                    }
    
                    if $y < T.rows {
                        T.print-cell: $flake.pos.x, $flake.pos.y, $flake.flake;            
                    }
    
                    try {
                        $flake.pos.y += 1;
                        CATCH {
                            # the flake has fallen all the way
                            # remove it and carry on!
                            @falling.splice($idx,1,());
                            .resume;
                        }
                    }
                }
            }
        }
    }
    
    });
    

    Let’s unpack that a bit, shall we?

    So the first thing to explain is draw. This is a handy helper routine that is also imported into the current lexical scope. It takes as its single argument a block which accepts a Promise. The block should include a start block so that keeping the argument promise works as expected. The implementation of draw is shockingly simple.

    So draw is really just short-hand for making sure the screen is set up and torn down properly. It leverages promises as (I’m told) a “conv-var” which according to the Promises spec might be an abuse of promises. I’m not very futzed about it, to be honest, since it suits my needs quite well.
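
    Inferred from that description, the shape of draw is roughly this (a hypothetical reconstruction, with the screen setup/teardown method names assumed from Terminal::Print):

    sub draw(&block) is export {
        my $promise = Promise.new;
        T.initialize-screen;
        block($promise);      # the caller's start block eventually keeps $promise
        await $promise;
        T.shutdown-screen;
    }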

    This approach also makes it quite easy to create a “time limit” for the snowfall by scheduling a promise to be kept at now + 33 — thirty-three seconds from when the loop starts. Then we keep the promise and draw shuts down the screen for us. This makes “escape” logic for your screensavers quite easy to implement (note that SIGINT also restores your screen properly. The most basic exit strategy works as expected, too :) ).

    The rest is pretty straightforward, though I’d point to the try block as a slightly clever (but not dangerously so) combination of where constraints on Coord‘s attributes and Perl 6’s resumable exceptions.

    Make it snow!

    And so, sixers, I bid you farewell for today with a little unconditional love from ab5tract’s curious little corner of the universe. Cheers!

    snowfall-p6


    Perl 6 Advent Calendar: Day 24 – One Year On

    Published by liztormato on 2016-12-24T10:51:57

    This time of year invites one to look back on things that have been, things that are and things that will be.

    Have Been

    I was reminded of things that have been when I got my new notebook a few weeks ago. Looking for a good first sticker to put on it, I came across an old ActiveState sticker:

    If you don’t know Perl
    you don’t know Dick

    A sticker from 2000! It struck me that that sticker was as old as Perl 6. Only very few people now remember that a guy called Dick Hardt was actually the CEO of ActiveState at the time. So even though the pun may be lost on most due to the mists of time, the premise still rings true to me: that Perl is more about a state of mind than about versions. There will always be another version of Perl. Those who don’t know Perl are doomed to re-implement it, poorly. Which, to me, is why so many ideas were borrowed from Perl. And still are!

    Are

    Where are we now? Is it the moment we know, We know, we know? I don’t think we are at twenty thousand people using Perl 6 just yet. But we’re keeping our fingers crossed. Just in case.

    We are now 12 compiler releases after the initial Christmas release of Perl 6. In this year, many, many areas of Rakudo Perl 6 and MoarVM have dramatically improved in speed and stability. Our canary-in-the-coalmine test has dropped from around 14 seconds a year ago to around 5.5 seconds today. A complete spectest run is now about 3 minutes, whereas it was about 4.5 minutes at the beginning of the year, while about 4000 tests were added (from about 50K to 54K). And we now have 757 modules in the Perl 6 ecosystem (aka temporary CPAN for Perl 6 modules), with a few more added every week.

    The #perl6 IRC channel has been too busy for me to follow consistently. But if you have a question related to Perl 6 and you want a quick answer, the #perl6 channel is the place to be. You don’t even have to install an IRC client: you can also use a browser to chat, or just follow “live” what is being said.

    There are also quite a few useful bots on that channel: they e.g. take care of running a piece of Perl 6 code for you. Or find out at which commit the behaviour of a specific piece of code changed. These are very helpful for the developers of Perl 6, who usually also hang out on the #perl6-dev IRC channel. Which could be you! The past year, at least one contributor was added to the CREDITS every month!

    Will Be

    The coming year will see at least three Perl 6 books being published. First one will be Think Perl 6 – How To Think Like A Computer Scientist by Laurent Rosenfeld. It is an introduction to programming using Perl 6. But even for those of you with programming experience, it will be a good book to start learning Perl 6. And I can know. Because I’ve already read it :-)

    Second one will be Learning Perl 6 by veteran Perl developer and writer brian d foy. It will have the advantage of being written by a seasoned writer going through the newbie experience that most people will have when coming from Perl 5.

    The third one will be Perl 6 By Example by Moritz Lenz, which will, as the title already gives away, introduce Perl 6 topics by example.

    There’ll be at least two (larger) Perl Conferences apart from many smaller Perl workshops: The Perl Conference NA on 18-23 June, and The Perl Conference in Amsterdam on 9-11 August. Where you will meet all sorts of nice people!

    And for the rest? Expect a faster, leaner, Perl 6 and MoarVM compiler release on the 3rd Saturday every month. And an update of weekly events in the Perl 6 Weekly on every Monday evening/Tuesday morning (depending on where you live).


    Perl 6 Advent Calendar: Day 23 – Everything is either wrong or less than awesome

    Published by AlexDaniel on 2016-12-23T00:07:12

    Have you ever spent your precious time on submitting a bug report for some project, only to get a response that you’re an idiot and you should f⊄∞÷ off?

    Right! Well, perhaps consider spending your time on Perl 6 to see that not every free/open-source project is like this.

    In the Perl 6 community, there is a very interesting attitude towards bug reports. Is it something that was defined explicitly early on? Or did it just grow organically? That remains a Christmas mystery. But the thing is, if it wasn’t for that, I wouldn’t have been willing to submit all the bugs that I submitted over the last year (more than 100). You made me like this.

    Every time someone submits a bug report, Perl 6 hackers always try to see if there is something that can be done better. Yes, sometimes the bug report is invalid. But even if it is, is there any way to improve the situation? Perhaps a warning could be thrown? Well, if so, then we treat the behavior as LTA (Less Than Awesome), and therefore the bug report is actually valid! We just have to tweak it a little bit, meaning that the ticket will now be repurposed to improve or add the error message, not to change the behavior of Perl 6.

    The concept of LTA behavior is probably one of the key things that keeps us from rejecting features that may seem to do too little good for the amount of effort required to implement them, but in the end become game changers. Another closely related concept that comes to mind is “Torment the implementors on behalf of the users”.

    OK, but what if this behavior is well-defined and is actually valid? In this case, it is still probably our fault. Why did the user get into this situation? Maybe the documentation is not good enough? Very often that is the issue, and we acknowledge that. So in a case of a problem with the documentation, we will usually ask you to submit a bug report for the documentation, but very often we will do it ourselves.

    Alright, but what if the documentation for this particular case is in place? Well, maybe the thing is not easily searchable? That could be the reason why the user didn’t find it in the first place. Or maybe we lack some links? Maybe the places that should link to this important bit of information are not doing so? In other words, perhaps there are still ways to improve the docs!

    But if not, then yes, we will have to write some tests for this particular case (if there are no tests yet) and reject the ticket. This happens sometimes.

    The last bit, even if obvious to some, is still worth mentioning. We do not mark tickets resolved without tests. One reason is that we want roast (which is a Perl 6 spec) to be as full as possible. The other reason is that we don’t want regressions to happen (thanks captain obvious!). As the first version of Perl 6 was released one year ago, we are no longer making any changes that would affect the behavior of your code. However, occasional regressions do happen, but we have found an easy way to deal with those!

    If you are not on #perl6 channel very often, you might not know that we have a couple of interesting bots. One of them is bisectable. In short, Bisectable performs a more user-friendly version of git bisect, but instead of building Rakudo on each commit, it has done it before you even asked it to! That is, it has over 5500 rakudo builds, one for every commit done in the last year and a half. This turns the time to run git bisect from minutes to about 10 seconds (Yes, 10 seconds is less than awesome! We are working on speeding it up!). And there are other bots that help us inspect the progress. The most recent one is Statisfiable, here is one of the graphs it can produce.

    So if you pop up on #perl6 with a problem that seems to be a regression, we will be able to find the cause in seconds. Fixing the issue will usually take a bit more than that though, but when the problem is significant, it will usually happen in a day or two. Sorry for breaking your code in attempts to make it faster, we will do better next time!

    But as you are reading this, perhaps you may be interested in seeing some bug reports? I thought that I’d go through the list of bugs of the last year to show how horribly broken things were, just to motivate the reader to go hunting for bugs. The bad news (oops, I mean good news) is that the number of “horrible” bugs seems to be decreasing a bit too fast. Thanks to many Rakudo hackers, things are getting more stable at a very rapid pace.

    Anyway, there are still some interesting things I was able to dig up:

    That being said, my favorite bug of all times is RT #127473. Three symbols in the source code causing it to go into an infinite loop printing stuff about QAST nodes. That’s a rather unique issue, don’t you think?

    I hope this post gave you a little insight on how we approach bugs, especially if you are not hanging around on #perl6 very often. Is our approach less than awesome? Do you have some ideas for other bots that could help us work with bugs? Leave it in the comments, we would like to know!


    Perl 6 Advent Calendar: Day 22 – Generative Testing

    Published by SmokeMachine on 2016-12-22T00:00:47

    OK! So say you finished writing your code and it’s looking good. Let’s say it’s this incredible sum function:

    module Sum {
       sub sum($a, $b) is export {
          $a + $b
       }
    }

    Great, isn’t it?! Let’s use it:

    use Sum;
    say sum 2, 3; # 5
    

    That worked! We summed the number 2 with the number 3 as you saw. If you carefully read the function you’ll see the variables $a and $b don’t have a type set. If you don’t type a variable it’s, by default, of type Any. 2 and 3 are Ints… Ints are Any. So that’s OK! But do you know what’s Any too? Str (just an example)!

    Let’s try using strings?

    use Sum;
    say sum "bla", "ble";
    

    We got a big error:

    Cannot convert string to number: base-10 number must begin with valid digits or '.' in 'bla' (indicated by ⏏)
      in sub sum at sum.p6 line 1
      in block  at sum.p6 line 7
    
    Actually thrown at:
      in sub sum at sum.p6 line 1
      in block  at sum.p6 line 7
    

    Looks like it does not accept Strs… It seems like Any may not be the best type to use in this case.

    Worrying about every possible input type for all our functions can demand way too much work, as well as still being prone to human error. Thankfully there’s a module to help us with that! Test::Fuzz is a perl6 module that implements the “technique” of generative testing/fuzz testing.

    Generative testing or Fuzz Testing is a technique of generating random/extreme data and using this data to call the function being tested.

    Test::Fuzz gets the signature of your functions and decides what generators it should use to test them. After that it runs your functions, giving them (100, by default) different arguments and testing if they will break.

    To test our function, all that’s required is:

    module Sum {
       use Test::Fuzz;
       sub sum($a, $b) is export is fuzzed {
          $a + $b
       }
    }
    multi MAIN(:$fuzz!) {
       run-tests
    }
    

    And run:

    perl6 Sum.pm6 --fuzz
    

    This case will still show a lot of errors:

    Use of uninitialized value of type Thread in numeric context
      in sub sum at Sum.pm6 line 4
    Use of uninitialized value of type int in numeric context
      in sub sum at Sum.pm6 line 4
        ok 1 - sum(Thread, int)
    Use of uninitialized value of type X::IO::Symlink in numeric context
      in sub sum at Sum.pm6 line 4
        ok 2 - sum(X::IO::Symlink, -3222031972)
    Use of uninitialized value of type X::Attribute::Package in numeric context
      in sub sum at Sum.pm6 line 4
        ok 3 - sum(X::Attribute::Package, -9999999999)
    Use of uninitialized value of type Routine in numeric context
      in sub sum at Sum.pm6 line 4
        not ok 4 - sum(áéíóú, (Routine))
    ...
    

    What does that mean?

    That means we should use one of the big features of perl6: Gradual typing. $a and $b should have types.

    So, let’s modify the function and test again:

    module Sum {
       use Test::Fuzz;
       sub sum(Int $a, Int $b) is export is fuzzed {
          $a + $b
       }
    }
    multi MAIN(:$fuzz!) {
       run-tests
    }
    
        ok 1 - sum(-2991774675, 0)
        ok 2 - sum(5471569889, 7905158424)
        ok 3 - sum(8930867907, 5132583935)
        ok 4 - sum(-6390728076, -1)
        ok 5 - sum(-3558165707, 4067089440)
        ok 6 - sum(-8930867907, -5471569889)
        ok 7 - sum(3090653502, -2099633631)
        ok 8 - sum(-2255887318, 1517560219)
        ok 9 - sum(-6085119010, -3942121686)
        ok 10 - sum(-7059342689, 8930867907)
        ok 11 - sum(-2244597851, -6390728076)
        ok 12 - sum(-5948408450, 2244597851)
        ok 13 - sum(0, -5062049498)
        ok 14 - sum(-7229942697, 3090653502)
        not ok 15 - sum((Int), 1)
    
        # Failed test 'sum((Int), 1)'
        # at site#sources/FB587F3186E6B6BDDB9F5C5F8E73C55195B73C86 (Test::Fuzz) line 62
        # Invocant requires an instance of type Int, but a type object was passed.  Did you forget a .new?
    ...
    

    A lot of OKs!  \o/

    But there’re still some errors… We can’t sum undefined values…

    We didn’t say the arguments should be defined (with :D). So Test::Fuzz generated every undefined sub-type of Int that it could find. It uses every generator of a sub-type of Int to generate values. It also works if you use a subset, or even a where clause, in your signature: it’ll use a super-type generator and grep for the valid values.
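
    As an aside, that means a subset-constrained signature can be fuzzed too. A sketch of what that could look like (the subset and sub names here are made up):

    subset Even of Int where * %% 2;

    module Sum {
       use Test::Fuzz;
       # Test::Fuzz would generate Ints and grep out the odd ones
       sub sum-even(Even:D $a, Even:D $b) is export is fuzzed {
          $a + $b
       }
    }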

    So, let’s change it again!

    module Sum {
       use Test::Fuzz;
       sub sum(Int:D $a, Int:D $b) is export is fuzzed {
          $a + $b
       }
    }
    multi MAIN(:$fuzz!) {
       run-tests
    }
        ok 1 - sum(6023702597, -8270141809)
        ok 2 - sum(-8270141809, -3762529280)
        ok 3 - sum(242796759, -7408209799)
        ok 4 - sum(-5813412117, -5280261945)
        ok 5 - sum(2623325683, 2015644992)
        ok 6 - sum(-696696815, -7039670011)
        ok 7 - sum(1, -4327620877)
        ok 8 - sum(-7712774875, 349132637)
        ok 9 - sum(3956553645, -7039670011)
        ok 10 - sum(-8554836757, 7039670011)
        ok 11 - sum(1170220615, -3)
        ok 12 - sum(-242796759, 2015644992)
        ok 13 - sum(-9558159978, -8442233570)
        ok 14 - sum(-3937367230, 349132637)
        ok 15 - sum(5813412117, 1170220615)
        ok 16 - sum(-7408209799, 6565554452)
        ok 17 - sum(2474679799, -3099404826)
        ok 18 - sum(-5813412117, 9524548586)
        ok 19 - sum(-6770230387, -7408209799)
        ok 20 - sum(-7712774875, -2015644992)
        ok 21 - sum(8442233570, -1)
        ok 22 - sum(-6565554452, 9999999999)
        ok 23 - sum(242796759, 5719635608)
        ok 24 - sum(-7712774875, 7039670011)
        ok 25 - sum(7408209799, -8235752818)
        ok 26 - sum(5719635608, -8518891049)
        ok 27 - sum(8518891049, -242796759)
        ok 28 - sum(-2474679799, 2299757592)
        ok 29 - sum(5356064609, 349132637)
        ok 30 - sum(-3491438968, 3438417115)
        ok 31 - sum(-2299757592, 7580671928)
        ok 32 - sum(-8104597621, -8158438801)
        ok 33 - sum(-2015644992, -3)
        ok 34 - sum(-6023702597, 8104597621)
        ok 35 - sum(2474679799, -2623325683)
        ok 36 - sum(8270141809, 7039670011)
        ok 37 - sum(-1534092807, -8518891049)
        ok 38 - sum(3551099668, 0)
        ok 39 - sum(7039670011, 4327620877)
        ok 40 - sum(9524548586, -8235752818)
        ok 41 - sum(6151880628, 3762529280)
        ok 42 - sum(-8518891049, 349132637)
        ok 43 - sum(7580671928, 9999999999)
        ok 44 - sum(-8235752818, -7645883481)
        ok 45 - sum(6460424228, 9999999999)
        ok 46 - sum(7039670011, -7788162753)
        ok 47 - sum(-9999999999, 5356064609)
        ok 48 - sum(8510706378, -2474679799)
        ok 49 - sum(242796759, -5813412117)
        ok 50 - sum(-3438417115, 9558159978)
        ok 51 - sum(8554836757, -7788162753)
        ok 52 - sum(-9999999999, 3956553645)
        ok 53 - sum(-6460424228, -8442233570)
        ok 54 - sum(7039670011, -7712774875)
        ok 55 - sum(-3956553645, 1577669672)
        ok 56 - sum(0, 9524548586)
        ok 57 - sum(242796759, -6151880628)
        ok 58 - sum(7580671928, 3937367230)
        ok 59 - sum(-8554836757, 7712774875)
        ok 60 - sum(9524548586, 2474679799)
        ok 61 - sum(-7712774875, 2450227203)
        ok 62 - sum(3, 1257247905)
        ok 63 - sum(8270141809, -2015644992)
        ok 64 - sum(242796759, -3937367230)
        ok 65 - sum(6770230387, -6023702597)
        ok 66 - sum(2623325683, -3937367230)
        ok 67 - sum(-5719635608, -7645883481)
        ok 68 - sum(1, 6770230387)
        ok 69 - sum(3937367230, 7712774875)
        ok 70 - sum(6565554452, -5813412117)
        ok 71 - sum(7039670011, -8104597621)
        ok 72 - sum(7645883481, 9558159978)
        ok 73 - sum(-6023702597, 6770230387)
        ok 74 - sum(-3956553645, -7788162753)
        ok 75 - sum(-7712774875, 8518891049)
        ok 76 - sum(-6770230387, 6565554452)
        ok 77 - sum(-8554836757, 5356064609)
        ok 78 - sum(6460424228, 8518891049)
        ok 79 - sum(-3438417115, -9999999999)
        ok 80 - sum(-1577669672, -1257247905)
        ok 81 - sum(-5813412117, -3099404826)
        ok 82 - sum(8158438801, -3551099668)
        ok 83 - sum(-8554836757, 1534092807)
        ok 84 - sum(6565554452, -5719635608)
        ok 85 - sum(-5813412117, -2623325683)
        ok 86 - sum(-8158438801, -3937367230)
        ok 87 - sum(5813412117, -46698532)
        ok 88 - sum(9524548586, -2474679799)
        ok 89 - sum(3762529280, -2474679799)
        ok 90 - sum(7788162753, 9558159978)
        ok 91 - sum(6770230387, -46698532)
        ok 92 - sum(1577669672, 6460424228)
        ok 93 - sum(4327620877, 3762529280)
        ok 94 - sum(-6023702597, -2299757592)
        ok 95 - sum(1257247905, -8518891049)
        ok 96 - sum(-8235752818, -6151880628)
        ok 97 - sum(1577669672, 7408209799)
        ok 98 - sum(349132637, 6770230387)
        ok 99 - sum(-7788162753, 46698532)
        ok 100 - sum(-7408209799, 0)
        1..100
    ok 1 - sum
    

    No errors!!!

    Currently Test::Fuzz only implements generators for Int and Str, but as I said, they will also be used for their super- and sub-classes. If you want to have generators for your custom class, you just need to implement a “static” method called generate-samples that returns sample instances of your class, an infinite number of instances if possible.
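
    Based on that description, a custom generator could look something like this (hypothetical, since the module is still under development and the exact hook may change):

    class Temperature {
        has Rat $.degrees is required;

        # Test::Fuzz would call this to obtain sample instances;
        # a lazy gather keeps the supply of samples "infinite"
        method generate-samples {
            lazy gather {
                take Temperature.new(degrees => (-273.15, 0.0, 100.0).pick) for ^Inf;
            }
        }
    }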

    Test::Fuzz is under development and isn’t in the perl6 ecosystem yet. And we need some help!

    EDITED: Now you only need to call run-tests()


    Death by Perl6: Adding on to Channels and Supplies in Perl6

    Published by Tony O'Dell on 2016-12-21T16:11:13

    Channels and supplies are perl6's way of implementing the Observer pattern. There are some significant differences behind the scenes of the two, but both can be used to implement a jQuery .on("event")-like experience for the user. Not a jQuery fan? Don't you worry your pretty little head, because this is perl6 and it's much more fun than whatever you thought.

    Why?

    Uhh, why do we want this?

    This adds some sugar to the basic reactive constructs and it makes the passing of messages a lot more friendly, readable, and manageable.

    What in Heck Does that Look Like?

    Let's have an example and then we'll dissect it.

    A Basic Example

    use Event::Emitter;  
    my Event::Emitter $e .= new;
    
    $e.on(/^^ .+ $$/, -> $data {
      # you can operate on $data here
      '  regex matches'.say;
    });
    
    $e.on({ True; }, -> $data {
      '  block matches'.say;
    });
    
    $e.on('event', -> $data {
      '  string matches'.say;
    });
    
    'event'.say;  
    $e.emit("event", { });
    
    'empty event'.say;  
    $e.emit("", { });
    
    'abc'.say;  
    $e.emit("abc", { });
    

    Output (this is the output for an emitter using Supply; more on this later):

    event  
      regex matches
      block matches
      string matches
    empty event  
      block matches
    abc  
      regex matches
      block matches
    

    Okay, that looks like a lot. It is, and it's much nicer to use than a large given/when combination. It also reduces indenting, so you have that going for you, which is nice.

    Let's start with the simple .on blocks we have.

      $e.on(/^^ .+ $$/, -> $data { ...
    

    This is telling the emitter handler that whenever an event is received, run that regular expression against it and if it matches, execute the block (passed in as the second argument). As a note, and illustrated in the example above, the handler can match against a Callable, Str, or Regex. The Callable must return True or False to let the handler know whether or not to execute the block.

    If that seems pretty basic, it is. But little things like this add up over time and help keep things manageable. Prepare yourself for more convenience.

    The Sugar

    Do you want ants? This is how you get ants.

    So, now we're looking for more value in something like this. Here it is: you can inherit from Event::Emitter::Role::Template (or roll your own) and then your classes will automatically inherit this .on/.emit event handling.

    Example
    use Event::Emitter::Role::Template;
    
    class ZefClass does Event::Emitter::Role::Template {  
      submethod TWEAK {
        $!event-emitter.on("fresh", -> $data {
          'Aint that the freshness'.say;
        });
      }
    }
    

    Then, further along in your application, whenever an object wants ZefClass to react to the 'fresh' event, all it needs to do is:

    $zef-class-instance.emit('fresh');

    Pretty damn cool.

    Development time is reduced significantly for a few reasons right off the bat:

    1. Implementing Supplier (or Channel) methods, setup, and event handling becomes unnecessary
    2. Event naming or matching is handled so it's easy to debug
    3. Handling or adding new event handling functions during runtime (imagine a plugin that may want to add more events to handle - like an IRC client that implements a handler for channel parting messages)
    4. Messages can be multiplexed through one Channel or Supply rather easily
    5. Creates more readable code

    That last reason is a big one. Imagine going back into one of your modules 2 years from now and debugging an issue where a Supplier is given an event and some data, and digging through those 600 lines of given/when.

    Worse, imagine debugging someone else's.

    A Quick Note on Channel vs Supply

    The Channel and Supply thing can take some getting used to for newcomers. The quick and dirty version: a Channel will distribute each event to only one listener (chosen by the scheduler) and order isn't guaranteed, while a Supply will distribute to all listeners, in the order the messages were received. Because the Channel based Event::Emitter handler executes the methods registered with it directly, when it receives a message all of your methods are called with the data.
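
    The distinction is easy to see with the raw built-ins, separate from Event::Emitter (a minimal illustration):

    # Supply: every tap sees every value, in the order emitted
    my $supplier = Supplier.new;
    $supplier.Supply.tap(-> $v { say "tap A got $v" });
    $supplier.Supply.tap(-> $v { say "tap B got $v" });
    $supplier.emit($_) for 1..3;   # both taps print all three values

    # Channel: each value is consumed exactly once
    my $channel = Channel.new;
    $channel.send($_) for 1..3;
    $channel.close;
    say "channel got $_" for $channel.list;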

    So, you've seen the example above as a Supply based event handler; check it out as a Channel based one, and note the difference in the .say output and in the instantiation of the event handler.

    use Event::Emitter;  
    my Event::Emitter $e .= new(:threaded); # !important - this signifies a Channel based E:E
    
    $e.on(/^^ .+ $$/, -> $data {
      # you can operate on $data here
      "  regex matches: $data".say;
    });
    
    $e.on({ True; }, -> $data {
      "  block matches: $data".say;
    });
    
    $e.on('event', -> $data {
      "  string matches: $data".say;
    });
    
    'event'.say;  
    $e.emit("event", "event");
    
    'empty event'.say;  
    $e.emit("", "empty event");
    
    'abc'.say;  
    $e.emit("abc", "abc");
    

    Output

    event  
    empty event  
    abc  
      regex matches: event
      block matches: event
      string matches: event
      block matches: empty event
      regex matches: abc
      block matches: abc
    

    Perl 6 Advent Calendar: Day 21 – Show me the data!

    Published by nadimkhemir on 2016-12-21T00:01:18

Over the years, I have enjoyed using the different data dumpers that Perl5 offers. From the basic Data::Dumper to modules dumping in hexadecimal, JSON, with colors, handling closures, with a GUI, as graphs via dot, and many others that fellow module developers have posted on CPAN (https://metacpan.org/search?q=data+dump&search_type=modules).

I always find things easier to understand when I can see the data and its relationships. The funkiest display belongs to ddd (https://www.gnu.org/software/ddd/), which I happen to fire up now and then just for the fun of it (the example shows C data, but it works as well with the Perl debugger).

    ddd

Many dumpers are geared towards data transformation and data transmission/storage. A few modules specialize in generating output for the end user to read; I have worked on systems that generated hundreds of thousands of lines of output, and it is close to impossible to read dumps generated by, say, Data::Dumper.

When I started using Perl6, I immediately felt the need to dump data structures (mainly because my noob code wasn’t doing what I expected it to do); this led me to port my Perl5 module (from https://metacpan.org/pod/Data::TreeDumper to https://github.com/nkh/P6-Data-Dump-Tree) to Perl6. I am now also thinking about porting my HexDump module. I warmly recommend learning Perl6 by porting your modules (if you have any on CPAN): it’s fun, educational, useful for the Perl6 community, and your modules address a need in a domain that you master, leaving you free to concentrate on the Perl6 itself.

My Perl5 module was ripe for a re-write, and I wanted to see if and how it would be better written in Perl6; I was not disappointed.

Perl6 is a big language; it takes time to get the pieces right, and for a beginner it may seem daunting, even with years of experience. The secret is to take it easy, not give up, and listen. Porting a module is the perfect exercise: you can take it easy because you have already done it before, you’re not going to give up because you know you can do it, and you have time to listen to people that have more experience (they also need your work). The Perl6 community has been exemplary: helpful, patient, supportive and always present. If you haven’t visited the #perl6 IRC channel yet, now is a good time.

    .perl

Every object in Perl6 has a ‘perl’ method; it can be used to dump the object and the objects under it. The official documentation (https://docs.perl6.org/language/5to6-nutshell#Data%3A%3ADumper) provides a good example.

    .gist

    Every object also inherits a ‘gist’ method from Mu, the official documentation (https://docs.perl6.org/routine/gist#(Mu)_routine_gist) states: “Returns a string representation of the invocant, optimized for fast recognition by humans.”

    dd, the micro dumper

It took me a while to discover this one; I saw it in a post on IRC. You know how it feels when you discover something simple after typing .perl and .gist a few hundred times, bahhh!

    https://docs.perl6.org/routine/dd
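To see the three built-in dumpers side by side, here is a tiny sketch (any recent Rakudo should do):

my %config = name => 'Camelia', tags => <perl6 butterfly>;
say %config.perl;   # evaluable code that reproduces the structure
say %config.gist;   # shorter, human-oriented rendering
dd %config;         # Rakudo-specific debugging dump, written to STDERR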

    The three dumpers above are built-in. They are also the fastest way to dump data but as much as their output is welcome, I know that it is possible to present data in a more legible way.

    Enter Data::Dump

    You can find the module on https://modules.perl6.org/ where all the Perl6 modules are. Perl6 modules link to repositories, Data::Dump source is on https://github.com/tony-o/perl6-data-dump.

Data::Dump introduces color, depth limitation, and type specific dumps. The code is a compact hundred lines and quite easy to understand. This module was quite helpful in a few cases that I had. It also dumps all the methods associated with objects. Unfortunately, it did fail on a few types of objects. Give it a try.

    Data::Dump::Tree

Emboldened by the Perl6 community, the fact that I really needed a dumper for visualization, and the experience from my Perl5 module (mainly the things that I wanted to be done differently), I started working on the module. I had some difficulties at the beginning; I knew nothing about the details of Perl6, and even if there is a resemblance with Perl5, it’s another beast. But I love it: it’s advanced, clean, and well designed. I am grateful for all the efforts that were invested in Perl6.

    P6 vs P5 implementation

It’s less than half the size and does as much, which makes it clearer (as much as my newbie code can be considered clean). The old code was one monolithic module with a few long functions; the new code has a better organisation, and some functionality was split out to extra modules. It may sound like bit-rot (and it probably is a little), but writing the new code in Perl6 made the changes possible: multi dispatch, traits and other built-in mechanisms greatly facilitate re-factoring.

    What does it do that the other modules don’t?

    I’ll only talk about a few points here and refer you to the documentation for all the details (https://raw.githubusercontent.com/nkh/P6-Data-Dump-Tree/master/lib/Data/Dump/Tree.pod); also have a look at the examples in the distribution.

The main goal for Data::Dump::Tree is readability; that is achieved with filters, type specific dumpers, colors, and dumper specialization via traits. In the examples directory, you can find JSON_parsed.pl, which parses 20 lines of JSON with JSON::Tiny (https://github.com/moritz/json). I’ll use it as an example below. The parsed data is dumped with .perl, .gist, Data::Dump, and Data::Dump::Tree.

.perl output (500 lines; unusable for any average human, though Gods can manage): [screenshot_20161219_185724]

.gist (400 lines; quite readable, though the lack of color and the long lines limit readability a bit). Also note that it looks better here than on my terminal, which has problems handling unicode properly. [screenshot_20161219_190004]

Data::Dump (4200 lines! Removing the methods would probably make it usable.) [screenshot_20161219_190439]

The methods dump does not help. [screenshot_20161219_190601]

Data::Dump::Tree (100 lines, and you are the judge of readability, as I am biased). Of course, Data::Dump::Tree is designed for this specific usage: first, it understands Match objects; second, it can display only the parts of the string that matched, which greatly reduces the noise. [screenshot_20161219_190932]

Tweaking output

    The options are explained in the documentation but here is a little list
    – Defining type specific dumper
    screenshot_20161219_185409

– filtering to remove data or add a representation for a data set; below, the data structure is dumped as-is and then filtered (with a filter that shows what it is doing).

As filtering happens on the “header” and “footer”, it should be easy to make an HTML/DHTML plugin; although bcat (https://rtomayko.github.io/bcat/), when using ASCII glyphs, already works fine.

[screenshot_20161219_191525]
    – set the display colors
    – change the glyphs
    – display address information or not
    – use subscripts for indexes
    – use ASCII, ANSI, or unicode for the glyphs

    Diffs

I tried to implement a diff display with the Perl5 module but failed miserably, as it needed architectural changes. The Perl6 version was much easier; in fact, it’s an add-on, a trait, that synchronizes two data dumps. This could be used in tests to show differences between expected and actual data.

[screenshot_20161219_184701]
Of course we can eliminate the extra glyphs and the data that is equivalent (I also changed the glyph type to ASCII). [screenshot_20161219_185035]

    From here

    Above anything else, I hope many authors will start writing Perl6 modules. And I also hope to see other data dumping modules. As for Data::Dump::Tree, as it gathers more users, I hope to get requests for change, patches, and error reports.


    Pawel bbkr Pabian: Let the fake times roll...

    Published by Pawel bbkr Pabian on 2016-12-14T17:59:11

In my $dayjob at GetResponse I have to deal constantly with time dependent features. For example, this email marketing platform allows you to use something called 'Time Travel', which is sending messages to your contacts at a desired hour in their time zones. So people around the world can get an email at 8:00, when they start their work and the chance of the message being read is highest. No matter where they live.

But even such a simple feature has more pitfalls than you can imagine. For example, a user has three contacts living in the Europe/Warsaw, America/Phoenix and Australia/Sydney time zones.

The obvious validation is to exclude nonexistent days; for example, the user cannot select 2017-02-29 because 2017 is not a leap year. But what if he wants to send a message at 2017-03-26 02:30:00? For America/Phoenix this is a piece of cake: just a 7 hour difference from UTC (or unix time). For Australia/Sydney things are a bit more complicated, because they use daylight saving time and this is their summer, so an additional time shift must be calculated. And for Europe/Warsaw this will fail miserably, because they are just changing to summer time (at 02:00:00 the clocks jump to 03:00:00) and 02:30 simply does not exist, therefore some fallback algorithm should be used.

    So for one date and time there are 3 different algorithms that have to be tested!
Unfortunately, most time dependent code does not expose any interface for passing in a current time to emulate all the edge cases; methods usually call time( ) or DateTime->now( ) internally. So let's test such a black box: it takes the desired date, time and time zone, and it returns how many seconds are left before the message should be sent.

package Timers;

use DateTime;

sub seconds_till_send {
    my $when = DateTime->new( @_ )->epoch( );
    my $now  = time( );

    return ( $when > $now ) ? $when - $now : 0;
}

1;

Output of this method changes in time. To test it in a consistent manner we must override the system time( ) call:

#!/usr/bin/env perl

use strict;
use warnings;

BEGIN {
    *CORE::GLOBAL::time = sub () { $::time_mock // CORE::time };
}

use Timers;
use Test::More;

# 2017-03-22 00:00:00 UTC
$::time_mock = 1490140800;

is Timers::seconds_till_send(
    'year' => 2017, 'month' => 3, 'day' => 26,
    'hour' => 2, 'minute' => 30,
    'time_zone' => 'America/Phoenix'
), 379800, 'America/Phoenix time zone';

Works like a charm! We have a consistent test that pretends our program is run at 2017-03-22 00:00:00 UTC, which means there are 4 days, 9 hours and 30 minutes until 2017-03-26 02:30:00 in America/Phoenix.

    We can also test DST case in Australia.

# 2017-03-25 16:00:00 UTC
$::time_mock = 1490457600;

is Timers::seconds_till_send(
    'year' => 2017, 'month' => 3, 'day' => 26,
    'hour' => 2, 'minute' => 30,
    'time_zone' => 'Australia/Sydney'
), 0, 'Australia/Sydney time zone';

Because during DST Sydney is +11 hours from UTC instead of +10, when we run our program at 2017-03-25 16:00:00 UTC the requested hour has already passed there, and the message should be sent instantly. Great!

But what about the nonexistent hour in Europe/Warsaw? We need to fix this method to return some useful value, in the DWIM-ness spirit, instead of crashing. And I haven't told you the whole, scary truth yet, because we have to solve two issues at once here. The first is the nonexistent hour: in this case we want to calculate seconds to the nearest possible hour after the requested one, so 03:00 Europe/Warsaw should be used if 02:30 Europe/Warsaw does not exist. The second is the ambiguous hour that happens when clocks are moved backwards; for example, 2017-10-29 02:30 Europe/Warsaw occurs twice during that day. In this case the first occurrence should be taken, so if 02:30 Europe/Warsaw is both 00:30 UTC and 01:30 UTC, seconds are calculated to the former. Yuck...

For simplicity let's assume the user cannot schedule a message more than one year ahead, so only one DST-related time change will take place. With that assumption, the fix may look like this:

sub seconds_till_send {
    my %params = @_;
    my $when;

    # expect ambiguous hour during summer to winter time change
    if ( DateTime->now( 'time_zone' => $params{'time_zone'} )->is_dst ) {

        # attempt to create ambiguous hour is safe
        # and will always point to latest hour
        $when = DateTime->new( %params );

        # was the same hour one hour ago?
        my $tmp = $when->clone;
        $tmp->subtract( 'hours' => 1 );

        # if so, correct to earliest hour
        if ( $when->hms eq $tmp->hms ) {
            $when = $when->epoch - 3600;
        }
        else {
            $when = $when->epoch;
        }
    }

    # expect nonexistent hour during winter to summer time change
    else {

        do {

            # attempt to create nonexistent hour will die
            $when = eval { DateTime->new( %params )->epoch( ) };

            # try next minute maybe...
            if ( ++$params{'minute'} > 59 ) {
                $params{'minute'} = 0;
                $params{'hour'}++;
            }

        } until defined $when;

    }

    my $now = time( );

    return ( $when > $now ) ? $when - $now : 0;
}

If your eyes are bleeding, here is the TL;DR. First we have to determine which case we may encounter, by checking whether there is currently DST in the requested time zone or not. For a nonexistent hour, we try to brute force it into the next possible time by adding one minute periods and adjusting hours when the minutes overflow. There is no need to adjust days, because DST never happens on a date change. For an ambiguous hour, we check if subtracting one hour gives the same wall-clock hour (yep, that can happen). If so, we have to correct the unix timestamp to get the earliest occurrence.

But what about our tests? Can we still write them in a deterministic and reproducible way? Luckily, it turns out that DateTime->now( ) uses time( ) internally, so no additional hacks are needed.

# 2017-03-26 00:00:00 UTC
$::time_mock = 1490486400;

is Timers::seconds_till_send(
    'year' => 2017, 'month' => 3, 'day' => 26,
    'hour' => 2, 'minute' => 30,
    'time_zone' => 'Europe/Warsaw'
), 3600, 'Europe/Warsaw time zone nonexistent hour';

That is the expected result: 02:30 is not available in Europe/Warsaw, so 03:00 is taken, which is already in the DST season and 2 hours ahead of UTC.

Now let's solve the leap second issue where, because the Moon is slowing down the Earth's rotation and making its timekeeping irregular, you may encounter a 23:59:60 hour every few years. OK, OK, I'm just kidding :) However, in good tests you should also take leap seconds into account if needed!

    I hope you learned from this post how to fake time in tests to cover weird edge cases.
Before you leave, I have 3 more things to share:

1. Dave Rolsky, the maintainer of the DateTime module, does a tremendous job. This module is a life saver. Thanks!
2. Overwrite CORE::GLOBAL::time before loading any module that calls time( ). If you do it this way
  use DateTime;

  BEGIN {
      *CORE::GLOBAL::time = sub () { $::time_mock // CORE::time };
  }

  $::time_mock = 123;
  say DateTime->now( );

      it won't have any effect due to sub folding.


3. Remember to give the time override an empty signature

  BEGIN {
      # no empty signature
      *CORE::GLOBAL::time = sub { $::time_mock // CORE::time };
  }

  $::time_mock = 123;

  # parsed as time( + 10 )
  # $y = 123, not what you expected!
  my $y = time + 10;

because you cannot guarantee that everyone always used parentheses when calling time( ).
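As a Perl 6 footnote: you can get the same determinism without global overrides by routing time lookups through a dynamic variable. A minimal sketch (the $*TIME-MOCK name and the routine are invented for illustration):

sub seconds-till-send(DateTime $when) {
    # fall back to the real clock when no mock is set
    my $now  = $*TIME-MOCK // now;
    my $diff = $when.Instant - $now;
    return $diff > 0 ?? $diff.Int !! 0;
}

# in a test, pretend it is 2017-03-22 00:00:00 UTC
{
    my $*TIME-MOCK = DateTime.new('2017-03-22T00:00:00Z').Instant;
    say seconds-till-send(DateTime.new('2017-03-26T02:30:00-07:00'));  # 379800
}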



    6guts: Complex cocktail causes cunning crash

    Published by jnthnwrthngtn on 2016-12-09T00:02:33

    I did a number of things for Perl 6 yesterday. It was not, however, hard to decide which of them to write up for the blog. So, let’s dig in.

    Horrible hiding heisenbugs

It all started when I was looking into this RT, reporting a segfault. It was filed a while ago, and I could not reproduce it. So, add a test and case closed? Well, not so fast. As I discussed last week, garbage collection happens when a thread fills up its nursery. Plenty of bugs only show up when the timing is just right (or is that just wrong?). What can influence when GC runs? How many allocations we’ve done. And what can influence that? Pretty much any change to the program being run, the compiler, or the environment, any of which can have people tearing their hair out.

    While GC-related issues are not the only cause of SEGVs, in such a simple piece of code using common features it’s far and away the most likely cause. So, to give myself more confidence that the bug truly was gone, I adjusted the nursery size to be just 32KB instead of 4MB, which causes GC to run much more often. This, of course, is a huge slowdown, but it’s good for squeezing out bugs.

    And…no bug! So, in goes the test. Simples!

    In even better news, it was lunch time. Well, actually, that was only sort of good news. A few days ago I cooked a rather nice chicken and berry pulao. It came out pretty good, but cooking 6 portions worth of it when I’m home alone for the week wasn’t so smart. I don’t want to see chicken pulao for a couple of months now. Anyway, while I was devouring some of my pulao mountain, I set off a spectest run on the 32KB nursery stress build of MoarVM, just to see if it showed up anything.

    Putrid pointers

A couple of failures did, in fact, show up, one of them in constant.t. This rang a bell. I was sure somebody had reported a crash in that test file in the last couple of weeks, which had then vanished. I checked in with the person who I vaguely recalled mentioning it and…sure enough, it was that very test file. In their normal test runs, the bug had long since vanished. I figured, having now got a reproduction of it, I should probably hunt it down right away. Otherwise, we’d probably end up playing “where’s Wally” with it for another month or ten.

    So, how did the failure look?

    $ ./perl6-m -Ilib t/spec/S04-declarations/constant.rakudo.moar 
    Segmentation fault (core dumped)
    

    It actually segfaulted while compiling the test file. Sad! So, where?

    $ ./perl6-gdb-m -Ilib t/spec/S04-declarations/constant.rakudo.moar 
    [boring output omitted]
    Program received signal SIGSEGV, Segmentation fault.
    0x0000000000000000 in ?? ()
    

    That looks…ungood. That final line is meant to be a code location, which means something decided to tell the CPU to go execute code living at the NULL address. At this point, things could go two ways: the JIT spat out something terrible, or a function pointer somewhere was NULL. But which?

    (gdb) where
    #0  0x0000000000000000 in ?? ()
    #1  0x00007ffff78cacbc in MVM_coerce_smart_stringify (tc=0x6037c0, obj=0x605c10, res_reg=0x56624d8)
        at src/core/coerce.c:214
    #2  0x00007ffff789dff4 in MVM_interp_run (tc=tc@entry=0x6037c0, initial_invoke=0x60ea80, 
        invoke_data=0x56624d8) at src/core/interp.c:827
    #3  0x00007ffff7978b21 in MVM_vm_run_file (instance=0x603010, 
        filename=0x7fffffffe09f "/home/jnthn/dev/rakudo/perl6.moarvm") at src/moar.c:309
    #4  0x000000000040108b in main (argc=9, argv=0x7fffffffdbc8) at src/main.c:192
    

    Phew, it looks like the second case, given there’s no JIT entry stub on the stack. So, we followed a NULL function pointer. Really?

    (gdb) frame 1
    #1  0x00007ffff78cacbc in MVM_coerce_smart_stringify (tc=0x6037c0, obj=0x605c10, res_reg=0x56624d8)
    at src/core/coerce.c:214
    214     ss = REPR(obj)->get_storage_spec(tc, STABLE(obj));
    

    Yes, really. Presumably, that get_storage_spec is bogus. (I did a p to confirm it.) So, how is obj looking?

    (gdb) p *obj
    $1 = {header = {sc_forward_u = {forwarder = 0x48000000000001, sc = {sc_idx = 1, idx = 4718592}, 
      st = 0x48000000000001}, owner = 6349760, flags = 0, size = 0}, st = 0x6d06c0}
    

    Criminally corrupt; let me count the ways. For one, 6349760 looks like a very high thread ID for a program that’s only running a single thread (they are handed out sequentially). For two, 0 is not a valid object size. And for three, idx is just a nuts value too (even Rakudo’s CORE.setting isn’t made up of 4 million objects). So, where does this object live? Well, let’s try out last week’s handy object locator to figure out:

    (gdb) p MVM_gc_debug_find_region(tc, obj)
    In tospace of thread 1
    

    Well. Hmpfh. That’s actually an OK place for an object to be. Of course, the GC spaces swap often enough at this nursery size that a pointer could fail to be updated, point into fromspace after one GC run, not be used until a later GC run, and then come to point into some random bit of tospace again. How to test this hypothesis? Well, instead of 32768 bytes of nursery, what if I make it…well, 40000 maybe?

    Here we go again:

    $ ./perl6-gdb-m -Ilib t/spec/S04-declarations/constant.rakudo.moar 
    [trust me, this omitted stuff is boring]
    Program received signal SIGSEGV, Segmentation fault.
    0x00007ffff78b00db in MVM_interp_run (tc=tc@entry=0x6037c0, initial_invoke=0x0, invoke_data=0x563a450)
        at src/core/interp.c:2855
    2855                    if (obj && IS_CONCRETE(obj) && STABLE(obj)->container_spec)
    

    Aha! A crash…somewhere else. But where is obj this time?

    (gdb) p MVM_gc_debug_find_region(tc, obj)
    In fromspace of thread 1
    

    Hypothesis confirmed.

    Dump diving

So…what now? Well, just turn on that wonderful MVM_GC_DEBUG flag and the bug will make itself clear, of course. Alas, no. It didn’t trip a single one of the sanity checks added by enabling the flag. So, what next?

    The where in gdb tells us where in the C code we are. But what high level language code was MoarVM actually running at the time? Let’s dump the VM frame stack and find out:

    (gdb) p MVM_dump_backtrace(tc)
       at <unknown>:1  (./blib/Perl6/Grammar.moarvm:initializer:sym<=>)
     from gen/moar/stage2/QRegex.nqp:1378  (/home/jnthn/dev/MoarVM/install/share/nqp/lib/QRegex.moarvm:!protoregex)
     from <unknown>:1  (./blib/Perl6/Grammar.moarvm:initializer)
     from src/Perl6/Grammar.nqp:3140  (./blib/Perl6/Grammar.moarvm:type_declarator:sym<constant>)
     from gen/moar/stage2/QRegex.nqp:1378  (/home/jnthn/dev/MoarVM/install/share/nqp/lib/QRegex.moarvm:!protoregex)
     from <unknown>:1  (./blib/Perl6/Grammar.moarvm:type_declarator)
     from <unknown>:1  (./blib/Perl6/Grammar.moarvm:term:sym<type_declarator>)
     from gen/moar/stage2/QRegex.nqp:1378  (/home/jnthn/dev/MoarVM/install/share/nqp/lib/QRegex.moarvm:!protoregex)
     from src/Perl6/Grammar.nqp:3825  (./blib/Perl6/Grammar.moarvm:termish)
     from gen/moar/stage2/NQPHLL.nqp:886  (/home/jnthn/dev/MoarVM/install/share/nqp/lib/NQPHLL.moarvm:EXPR)
    from src/Perl6/Grammar.nqp:3871  (./blib/Perl6/Grammar.moarvm:EXPR)
    ...
    

    I’ve snipped out a good chunk of a fairly long stack trace. But look! We were parsing and compiling a constant at the time of the crash. That’s somewhat interesting, and explains why constant.t was a likely test file to show this bug up. But MoarVM has little idea about parsing or Perl 6’s idea of constants. Rather, something on that codepath of the compiler must run into a bug of sorts.

    Looking at the location in interp.c the op being interpreted at the time was decont, which takes a value out of a Scalar container, if it happens to be in one. Combined with knowing what code we were in, I can invoke moar --dump blib/Perl6/Grammar.moarvm, and then locate the disassembly of initializer:sym<=>.

    There were a few uses of the decont op in that function. All of them seemed to be on things looked up lexically or dynamically. So, I instrumented those ops with a fromspace check. Re-compiled, and…

    (gdb) break MVM_panic
    Breakpoint 1 at 0x7ffff78a19a0: file src/core/exceptions.c, line 779.
    (gdb) r
    Starting program: /home/jnthn/dev/MoarVM/install/bin/moar --execname=./perl6-gdb-m --libpath=/home/jnthn/dev/MoarVM/install/share/nqp/lib --libpath=/home/jnthn/dev/MoarVM/install/share/nqp/lib --libpath=. /home/jnthn/dev/rakudo/perl6.moarvm --nqp-lib=blib -Ilib t/spec/S04-declarations/constant.rakudo.moar
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    
    Breakpoint 1, MVM_panic (exitCode=1, 
        messageFormat=0x7ffff799bc58 "Collectable %p in fromspace accessed") at src/core/exceptions.c:779
    779 void MVM_panic(MVMint32 exitCode, const char *messageFormat, ...) {
    (gdb) where
    #0  MVM_panic (exitCode=1, messageFormat=0x7ffff799bc58 "Collectable %p in fromspace accessed")
        at src/core/exceptions.c:779
    #1  0x00007ffff78ba657 in MVM_interp_run (tc=0x1, tc@entry=0x6037c0, initial_invoke=0x0, 
        invoke_data=0x604b80) at src/core/interp.c:374
    #2  0x00007ffff7979071 in MVM_vm_run_file (instance=0x603010, 
        filename=0x7fffffffe09f "/home/jnthn/dev/rakudo/perl6.moarvm") at src/moar.c:309
    #3  0x000000000040108b in main (argc=9, argv=0x7fffffffdbc8) at src/main.c:192
    

    And what’s in interp.c around that line? The getdynlex op. That’s the one that is used to lookup things like $*FOO in Perl 6. So, a dynamic lexical lookup seemed to be handing back an outdated object. How could that happen?

    Interesting idea is insufficient

    My next idea was to see if I could catch the moment that something bad was put into the lexical. I’d already instrumented the obvious places with no luck. But…what if I could intercept every single VM register access and see if an object from fromspace was read? Hmmm… It turned out that I could make that happen with a sufficiently cunning patch. I made it opt-in rather than the default for MVM_GC_DEBUG because it’s quite a slow-down. I’m sure that this will come in really useful for finding some GC bug some day. But for this bug? It was no direct help.

    It was of some indirect help, however. It suggested strongly that at the time the lexical was set (actually, it turned out to be $*LEFTSIGIL), everything was valid. Somewhere between then and the lookup of it using the getdynlex op, things went bad.

    Cache corruption

    So what does getdynlex actually do? It checks if the current frame declares a lexical of the specified name. If so, it returns the value. If not, it looks in the caller for the value. If that fails, it goes on to the caller’s caller, until it runs out of stack to search and then gives up.
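That walk is exactly what makes dynamic variables behave as they do at the language level; a tiny Perl 6 illustration (names invented here):

sub lookup()    { say $*GREETING }                    # not declared here...
sub level-two() { lookup() }                          # ...nor here...
sub level-one() { my $*GREETING = 'hi'; level-two() }
level-one();    # prints 'hi': found two callers up the stack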

If that’s what it really did, then this bug would never have happened. But no, people actually want Perl 6 to run fast and stuff, so we can’t just implement the simplest possible thing and go chill. Instead, there’s a caching mechanism. And, as we all know, the two hardest problems in computer science are cache invalidation and cache invalidation.

    The caching is relatively simple: each frame has slots for sticking a name, register pointer, and type in it.

    MVMString   *dynlex_cache_name;
    MVMRegister *dynlex_cache_reg;
    MVMuint16    dynlex_cache_type;
    

When getdynlex finds something the slow way, it then looks down the stack again and finds a frame with an empty dynlex_cache_name. It then sticks the name of dynamic lexical into the name slot, a pointer to the MoarVM lexical into the reg slot, and what type of lexical it was (native int, native num, object, etc.) into the type slot. The most interesting of these is the reg slot. The MVMRegister type is actually a union of different types that we may store in a register or lexical. We re-use the union for registers that live while the frame is on the callstack and lexicals that may need to live longer thanks to closures. So, each frame has two arrays of these:

    MVMRegister *env;   /* The lexical environment */
    MVMRegister *work;  /* Working space that dies with the frame */
    

    And so the dynlex_cache_reg ends up pointing to env somewhere in the frame that we found the lexical in.

    So, the big question: was the caching to blame? I shoved in a way to disable it and…the bug vanished.

    Note by this point we’re up to two pieces that contribute to the bug: the GC and the dynamic lexical cache. The thing is, the dynamic lexical cache is used very heavily. My gut feeling told me there must be at least one more factor at play here.

    Suspicious specialization

    So, what could the other factor be? I re-enabled the cache, verified the crash came back, and then stuck MVM_SPESH_DISABLE=1 into the environment. And…no bug. So, it appeared that dynamic optimization was somehow involved too. That’s the magic that looks at what types actually show up at runtime, and compiles specialized versions of the code that fast-paths a bunch of operations based on that (this specialization being where the name “spesh” comes from). Unfortunately, MVM_SPESH_DISABLE is a rather blunt instrument. It disables a huge range of things, leaving a massive surface area to consider for the bug. Thankfully, there are some alternative environment variables that just turn off parts of spesh.

    First, I tried MVM_JIT_DISABLE=1, which results in spesh interpreting the specialized version of the code rather than turning it into machine code to remove the interpreter overhead. The bug remained.

Next, I tried MVM_SPESH_OSR_DISABLE, which disables On Stack Replacement. This is a somewhat fiddly optimization that detects hot loops as they are being interpreted, pauses execution, produces an optimized version of the code, and then recalculates the program counter so it points to the appropriate point in the optimized code and continues execution. Basically, the interpreter gets the code it’s executing replaced under it – perhaps with machine code, which the interpreter is instructed to jump into immediately. Since this also fiddles with frames “in flight”, it seemed like a good candidate. But…nope. Bug remained.

    Finally, I tried MVM_SPESH_INLINE_DISABLE, which disables inlining. That’s where we spot a call to a relatively small subroutine or method, and just replace the call with the code of the sub or method itself, saving the cost of setting up callframes. And…bug vanished!

    So, inlining was apparently a factor too. The trouble is, that also didn’t seem to add up to an obvious bug. Consider:

    sub foo($a) {
        bar($a);
    }
    sub bar($b) {
        my $c = $b + 6;
        $c * 6
    }
    

    Imagine that bar was to be inlined into foo. Normally they’d have lexical slots in ->env as follows:

    A:  | $_ | $! | $/ | $a |
    B:  | $_ | $! | $/ | $b | $c |
    

    The environment for the frame inline(A, B) would look like:

    inline(A, B):  | $_ | $! | $/ | $a | $_ | $! | $/ | $b | $c |
                    \---- from A ----/  \------- from B -------/
    

    Now, it’s easy to imagine various bugs that could arise in the initial lookup of a dynamic lexical in such a frame. Indeed, the dynamic lexical lookup code actually has two bunches of code that deal with such frames, one in the case the specialized code is being interpreted and one in the case where it has been JIT compiled. But by the time we are hitting the cache, there’s nothing smart going on at all: it’s just a cheap pointer deference.

    Dastardly deoptimization

So, it seems we need a fourth ingredient to explain the bug. By now, I had a pretty good idea what it was. MoarVM doesn’t just do optimizations based on properties it can prove will always hold. It can also do speculative optimization based on properties that it expects will probably hold up. For example, suppose we have:

    sub foo($a, $b) {
        $b.some-huge-complex-call($a);
        return $a.Str;
    }
    

Imagine we’re generating a specialization of this routine for the case where $a is an object of type Product. The Str method is tiny, so we go ahead and inline it. However, some-huge-complex-call takes all kinds of code paths. We can’t be sure, from our analysis, that it won’t at some point mix something in to the object in $a. What if it mixes in a role that has an alternate Str method? Our inlining would break stuff! We’d end up calling the Product.Str method, not the one from the mixin.
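To make that concrete, here’s a Perl 6 sketch of the kind of code that invalidates such speculation (the names are made up to echo the example above):

class Product   { method Str { 'Product #42' } }
role Discounted { method Str { 'SALE! ' ~ callsame } }

sub some-huge-complex-call($a) {
    $a does Discounted;   # runtime mixin changes what .Str means
}

my $p = Product.new;
some-huge-complex-call($p);
say $p.Str;   # SALE! Product #42, so an inlined Product.Str would now be wrong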

    One reaction is to say “well, we’ll just not ever optimize stuff unless we can be REALLY sure”, which is either hugely limiting or relies on much more costly analyses. The other path, which MoarVM does, is to say “eh, let’s just assume mixins won’t happen, and if they do, we’ll fix things then!” The process of fixing things up is called deoptimization. We walk the call stack, rewriting return addresses to point to the original interpreted code instead of the optimized version of the code.

    But what, you might wonder, do we do if there’s a frame on the stack that is actually the result of an inlining operation? What if we’re in the code that resulted from inline(A,B), in the bit that corresponds to the code of B? Well, we have to perform – you guessed it – uninlining! The composite call frame has to be dissected, and the call stack rewritten to look like it would have if we’d been running the original interpreted code. To do this, we’d create a call frame for B, complete with space for its lexicals, and copy the lexicals from inline(A,B) that belong to B into that new buffer.

    The code that does this is one of the very few parts of MoarVM that frightens me.

    For good reason, it turns out. This deoptimization, together with uninlining, was the final ingredient needed for the bug. Here’s what happened:

    1. The method EXPR in Perl6::Grammar was inlined into one of its callers. This EXPR method declares a $*LEFTSIGIL variable.
    2. While parsing the constant, the $*LEFTSIGIL is assigned to the sigil of the constant being declared, if it has one (so, in constant $a = 42 it would be set to $).
    3. Something does a lookup of $*LEFTSIGIL. It is located and cached. The cache entry points into a region of the ->env of the frame that inlined, and thus incorporated, the lexical environment of EXPR.
    4. At some point, a mixin happens, causing a deoptimization of the call stack. The frame that inlined EXPR gets pulled apart. A new EXPR frame comes to exist, with the lexicals that used to live in the composite frame copied into them. Execution continues.
    5. A GC happens. The object containing the $ substring moves. The new EXPR frame’s lexical environment is updated.
    6. Another lookup of $*LEFTSIGIL happens. It hits the cache. The cache, however, still points to the place the lexical used to live in the composite frame. This memory has not been freed, because the first part of it is still being used. However, the GC no longer cares about its contents because that content is unreachable. Therefore, it contains an outdated pointer, thus leading to accessing memory that’s being used for something else entirely by that point, leading to the eventual segmentation fault.

    The most natural fix was to invalidate the cache during deoptimization.

    Lessons learned

The bug I wrote up last week was thanks to a comparatively simple oversight made within the scope of a few lines of a single C function. While this one could also be fixed with a small amount of code added in a single file, the segfault arose from the interaction of four distinct features existing in MoarVM: garbage collection, the dynamic lexical lookup cache, inlining, and deoptimization.

    Even when a segfault was not produced, thanks to “lucky” GC timing, the bug would lead to reading of stale data. It just so turned out that the data wasn’t ever stale enough in reality to break things on this particular code path.

    All of garbage collection, inlining, and deoptimization are fairly complicated. By contrast, the dynamic lexical lookup cache is fairly easy. Interestingly, it was the addition of this easy feature that introduced the bug – not because the code that was added was wrong, but rather because it did something that some other far flung piece of code – the deoptimizer – had quietly relied on not happening.

    So, what might be learned for the future?

    The most immediate practical learning is that taking interior pointers into mutable data structures is risky. In this case, that data structure was a composite lexical environment, that later got taken apart. Conceptually, the environment was resized and the interior pointer was off the end of the new size. This suggests either providing a safe way to acquire such a reference, or an alternative design for the dynamic lexical cache to avoid needing to do so.

    Looking at the bigger picture, this is all about managing complexity. Folks who work with me tend to observe I worry a good bit about loose coupling, to the degree that I’m much more hesitant than the typical developer when it comes to code re-use. Acutely aware that re-use means use, and use means dependency, and dependency means coupling, I tend to want things to prove they really are the same thing rather than just looking like they might be the same thing. MoarVM reflects this in various ways: to the degree I can, I try to organize it as a collection of parts that either keep themselves very much to themselves, or that collaborate over a small number of very stable data structures. One of the reasons Git works architecturally is because while all the components of it are messing with the same data structure, it’s a very stable and well-understood data structure.

    In this bug, MVMFrame is the data structure in question. A whole load of components know about it and work with it because – so the theory went – it’s one of the key stable data structures of the VM. Contrast it with the design of things like the Unicode normalizer or the fixed size allocator, which nothing ever pokes into directly. These are likely to want to evolve over time to choose smarter data structures, or to get extra bits of state to cope with Unicode’s latest and greatest emoji boundary specification. Therefore, all work with them is hidden nicely behind an API.

In reality, MVMFrame has grown to contain quite a fair few things as MoarVM has evolved. At the same time, it’s treated as a known quantity by lots of parts of the codebase. This is only sustainable if every addition to MVMFrame is followed by considering how every other part of the VM that interacts with it will be affected by the change, and making compensating changes to those components. In this case, the addition of the dynamic lexical cache into the frame data structure was not accompanied by sufficient analysis of which other parts of the VM may need compensating changes.

    The bug I wrote up last week isn’t really the kind that causes an architect a headache. It was a localized coding slip-up that could happen to anyone on a bad day. It’s a pity we didn’t catch it in code review, but code reviewers are also human. This bug, by contrast, arose as a result of the complexity of the VM – or, more to the point, insufficient management of that complexity. And no, I’m not beating myself up over this. But, as MoarVM architect, this is exactly the type of bug that catches my eye, and causes me to question assumptions. In the immediate, it tells me what kinds of patches I should be reviewing really carefully. In the longer run, the nature of the MVMFrame data structure and its level of isolation from the rest of the codebase deserves some questioning.


    6guts: Taking a couple of steps backwards to fix a GC bug

    Published by jnthnwrthngtn on 2016-11-30T23:16:42

    When I popped up with a post here on Perl 6 OO a few days ago, somebody noted in the comments that they missed my write-ups of my bug hunting and fixing work in Rakudo and MoarVM. The good news is that the absence of posts doesn’t mean an absence of progress; I’ve fixed dozens of things over the last months. It was rather something between writers block and simply not having the energy, after a day of fixing things, to write about it too. Anyway, it seems I’ve got at least some of my desire to write back, so here goes. (Oh, and I’ll try and find a moment in the coming days to reply to the other comments people wrote on my OO post too.)

    Understanding a cryptic error

There are a number of ways MoarVM can come tumbling down when memory gets corrupted. Some cases show up as segmentation faults. In other cases, the VM comes across something that simply does not make any kind of sense, and can infer that memory has become corrupted. Two panics commonly associated with this are “zeroed target thread ID in work pass” and “invalid thread ID XXX in GC work pass”, where XXX tends to be a sizable integer. At the start of a garbage collection – where we free up memory associated with dead objects – we do something like this:

    1. Go through all the threads that have been started, and signal those that are not blocked (e.g. waiting for I/O, a lock acquisition, or for native code to finish) to come and participate in the garbage collection run.
    2. Assign each non-blocked thread itself to work on.
    3. Assign each blocked thread’s work to a non-blocked thread.

    So, every thread – blocked or not – ends up assigned to a running thread to take care of its collection work. It’s the participation of multiple threads that makes the MoarVM GC parallel (which is a different thing to having a concurrent GC; MoarVM’s GC can barely claim to be that).

The next important thing to know is that every object, at creation, is marked with the ID of the thread that allocated it. This means that, as we perform GC, we know whether the object under consideration “belongs” to the current thread we’re doing GC work for, or some other one. In the case that the ID in the object header doesn’t match up with the thread ID we’re doing GC work for, then we stick it into a list of work to pass off to the thread that is responsible. To avoid synchronization overhead, we pass them off in batches (so there’s only synchronization overhead per batch). This is far from the only way to do parallel GC (other schemes include racing to write forwarding pointers), but it keeps the communication between participating threads down and leaves little surface area for data races in the GC.

    The funny thing is that if none of that really made any sense to you, it doesn’t actually matter at all, because I only told you about it all so you’d have a clue what the “work pass” in the error message means – and even that doesn’t matter much for understanding the bug I’ll eventually get around to discussing. Anyway, TL;DR version (except you did just read it all, hah!) is that if the owner ID in an object header is either zero or an out-of-range thread ID, then we can be pretty sure there’s memory corruption afoot. The pointer under consideration is either to zeroed memory, or to somewhere in memory that does not correspond to an object header.

    So, let’s debug the panic!

    Getting the panic is, perhaps, marginally better than a segmentation fault. I mean, sure, I’m a bit less embarrassed when Moar panics than SEGVs, and perhaps it’s mildly less terrifying for users too. But at the end of the day, it’s not much better from a debugging perspective. At the point we spot the memory corruption, we have…a pointer. That points somewhere wrong. And, this being the GC, it just came off the worklist, which is full of a ton of pointers.

    If only we could know where the pointer came from, I hear you think. Well, it turns out we can: we just need to detect the problem some steps back, where the pointer is added to the worklist. In src/gc/debug.h there’s this:

    #define MVM_GC_DEBUG 0
    

Flip that to a 1, recompile, and magic happens. Here’s a rather cut down snippet from worklist.h:

    #if MVM_GC_DEBUG
    #define MVM_gc_worklist_add(tc, worklist, item) \
        do { \
            MVMCollectable **item_to_add = (MVMCollectable **)(item); \
            if (*item_to_add) { \
                if ((*item_to_add)->owner == 0) \
                    MVM_panic(1, "Zeroed owner in item added to GC worklist"); \
                    /* Various other checks here.... */ 
            } \
            if (worklist->items == worklist->alloc) \
                MVM_gc_worklist_add_slow(tc, worklist, item_to_add); \
            else \
                worklist->list[worklist->items++] = item_to_add; \
        } while (0)
    #else
    #define MVM_gc_worklist_add(tc, worklist, item) \
        do { \
            MVMCollectable **item_to_add = (MVMCollectable **)(item); \
            if (worklist->items == worklist->alloc) \
                MVM_gc_worklist_add_slow(tc, worklist, item_to_add); \
            else \
                worklist->list[worklist->items++] = item_to_add; \
        } while (0)
    #endif
    

    So, in the debug version of the macro, we do some extra checks – including the one to detect a zeroed owner. This means that when MoarVM panics, the GC code that is placing the bad pointer into the list is on the stack. Then it’s a case of using GDB (or your favorite debugger), sticking a breakpoint on MVM_panic (spelled break MVM_panic in GDB), running the code that explodes, and then typing where. In this case, I was pointed at the last line of this bit of code from roots.c:

    void MVM_gc_root_add_frame_roots_to_worklist(MVMThreadContext *tc, MVMGCWorklist *worklist,
                                                 MVMFrame *cur_frame) {
        /* Add caller to worklist if it's heap-allocated. */
        if (cur_frame->caller && !MVM_FRAME_IS_ON_CALLSTACK(tc, cur_frame->caller))
            MVM_gc_worklist_add(tc, worklist, &cur_frame->caller);
    
        /* Add outer, code_ref and static info to work list. */
        MVM_gc_worklist_add(tc, worklist, &cur_frame->outer);
    

So, this tells me that the bad pointer is to an outer. The outer pointer of a call frame points to the enclosing lexical scope, which is how closures work. This provides a bit of inspiration for bug hunting; for example, it would now make sense to consider codepaths that assign outer to see if they could ever fail to keep a pointer up to date. The trouble is, for such an incredibly common language feature to be broken in that way, we’d be seeing it everywhere. It didn’t fit the pattern. In fact, my private $dayjob application that was afflicted with this and the whateverable set of IRC bots had something in common: they both did a bunch of concurrency work and spawned quite a lot of subprocesses using Proc::Async.

    But where does the pointer point to?

Sometimes I look at a pointer and it’s obviously totally bogus (a small integer usually suggests this). But this one looked feasible; it was relatively similar to the addresses of other valid pointers. But where exactly did it point?

There are only a few places that a GC-managed object can live: the tospace or fromspace of some thread’s nursery, or the old generation.

    So, it would be very interesting to know if the pointer was into one of those. Now, I could just go examining it in the debugger, but with a dozen running threads, that’s tedious as heck. Laziness is of course one of the virtues of a programmer, so I wrote a function to do the search for me. Another re-compile, reproducing the bug in GDB again, and then calling that routine from the debugger told me that the pointer was into the tospace of another thread.

    Unfortunately, thinking is now required

Things get just a tad mind-bending here. Normally, when a program is running, if we see a pointer into fromspace we know we’re in big trouble. It means that the pointer points to where an object used to be, but was then moved into either tospace or the old generation. But when we’re in the middle of a GC run, the two spaces are flipped. The old tospace is now fromspace, the old fromspace becomes the new tospace, and we start evacuating living objects into it. The space left at the end will then be zeroed later.

I should mention at this point that the crash only showed up a fraction of the time in my application. The vast majority of the time, it ran just fine. The odd time, however, it would panic – usually over a zeroed thread owner, but sometimes over a junk value being in the thread owner too. This all comes down to timing: different threads are working on GC; in different runs of the program they make progress at different paces, or get head starts, or whatever, and so whether the unused part of tospace has been zeroed yet or not will vary.

    But wait…why didn’t it catch the problem even sooner?

    When the MVM_GC_DEBUG flag is turned on, it introduces quite a few different sanity checks. One of them is in MVM_ASSIGN_REF, which happens whenever we assign a reference to one object into another. (The reason we don’t simply use the C assignment operator for that is because the inter-generational write barrier is needed.) Here’s how it looks:

    #if MVM_GC_DEBUG
    #define MVM_ASSIGN_REF(tc, update_root, update_addr, referenced) \
        { \
            void *_r = referenced; \
            if (_r && ((MVMCollectable *)_r)->owner == 0) \
                MVM_panic(1, "Invalid assignment (maybe of heap frame to stack frame?)"); \
            MVM_ASSERT_NOT_FROMSPACE(tc, _r); \
            MVM_gc_write_barrier(tc, update_root, (MVMCollectable *)_r); \
            update_addr = _r; \
        }
    #else
    #define MVM_ASSIGN_REF(tc, update_root, update_addr, referenced) \
        { \
            void *_r = referenced; \
            MVM_gc_write_barrier(tc, update_root, (MVMCollectable *)_r); \
            update_addr = _r; \
        }
    #endif
    

    Once again, the debug version does some extra checks. Those reading carefully will have spotted MVM_ASSERT_NOT_FROMSPACE in there. So, if we used this macro to assign to the ->outer that had the outdated pointer, why did it not trip this check?

    It turns out, because it only cared about checking if it was in fromspace of the current thread, not all threads. (This is in turn because the GC debug bits only really get any love when I’m hunting a GC bug, and once I find it then they go back in the drawer until next time around.) So, I enriched that check and…the bug hunt came to a swift end.

    Right back to the naughty deed

    The next time I caught it under the debugger was not at the point that the bad ->outer assignment took place. It was even earlier than that – lo and behold, inside of some of the guts that power Proc::Async. Once I got there, the problem was clear and fixed in a minute. The problem was that the callback pointer was not rooted while an allocation took place. The function MVM_repr_alloc_init can trigger GC, which can move the object pointed to by callback. Without an MVMROOT to tell the GC where the callback pointer is so it can be updated, it’s left pointing to where the callback used to be.

    So, bug fixed, but you may still be wondering how exactly this bug could have led to a bad ->outer pointer in a callframe some way down the line. Well, callback is a code object, and code objects point to an outer scope (it’s actually code objects that we clone to make closures). Since we held on to an outdated code object pointer, it in turn would point to an outdated pointer to the outer frame it closed over. When we invoked callback, the outer from the code object would be copied to be the outer of the call frame. Bingo.

    Less is Moar

    The hard part about GCs is not just building the collector itself. It’s that collectors bring invariants that are to be upheld, and a momentary lapse in concentration by somebody writing or reviewing a patch can let a bug like this slip through. At least 95% of the time when I handwavily say, “it was a GC bug”, what I really mean was “it was a bug that arose because some code didn’t play by the rules the GC requires”. A comparatively tiny fraction of the time, there’s actually something wrong in the code living under src/gc/.

    People sometimes ask me about my plans for the future of MoarVM. I often tell them that I plan for there to be less of it. In this case, the code with the bug is something that I hope we’ll eventually write in, say, NQP, where we don’t have to worry about low-level details like getting write barriers correct. It’s just binding code to libuv, a C library, and we should be able to do that using the MoarVM native calling support (which is likely mature enough by now). Alas, that also has its own set of costs, and I suspect we’d need to improve native calling performance to not come out at a measurable loss, and that means teaching the JIT to emit native calls, but we only JIT on x64 so far. “You’re in a maze of twisty VM design trade-offs, and their funny smells are all alike.”


    6guts: Perl 6 is biased towards mutators being really simple. That’s a good thing.

    Published by jnthnwrthngtn on 2016-11-25T01:09:33

I’ve been meaning to write this post for a couple of years, but somehow never quite got around to it. Today, the topic of mutator methods came up again on the #perl6 IRC channel, and – at long last – coincided with me having the spare time to write this post. Finally!

At the heart of the matter is a seemingly simple question: why does Perl 6 not have something like the C# property syntax for writing complex setters? Several of the obvious answers to that are either wrong or sub-optimal.

    Back to OO basics

The reason the question doesn’t have a one-sentence answer is because it hinges on the nature of object orientation itself. Operationally, objects consist of:

– Some state
– A mechanism for receiving messages and reacting to them

    If your eyes glazed over on the second bullet point, then I’m glad you’re reading. If I wasn’t trying to make a point, I’d have simply written “a mechanism for calling a method on an object”. So what is my point? Here’s a quote from Alan Kay, who coined the term “object oriented”:

I’m sorry that I long ago coined the term “objects” for this topic because it gets many people to focus on the lesser idea. The big idea is “messaging”…

    For years, I designed OO systems primarily thinking about what objects I’d have. In class-based languages, this really meant what classes I’d have. How did I figure that out? Well, by thinking about what fields go in which objects. Last of all, I’d write the methods.

Funnily enough, this looks very much like procedural design. How do I build a C program? By modeling the state into various structs, and then writing functions that work with those structs. Seen this way, OO looks a lot like procedural. Furthermore, since OO is often taught as “the next step up” after procedural styles of programming, this way of thinking about objects is extremely widespread.

    It’s little surprise, then, that a lot of OO code in the wild might as well have been procedural code in the first place. Many so-called OO codebases are full of DTOs (“Data Transfer Objects”), which are just bundles of state. These are passed to classes with names like DogManager. And a manager is? Something that meddles with stuff – in this case, probably the Dog DTO.

    Messaging thinking

    This is a far cry from how OO was originally conceived: autonomous objects, with their own inner state, reacting to messages received from the outside world, and sending messages to other objects. This thinking can be found today. Of note, it’s alive and well in the actor model. These days, when people ask me how to get better at OO, one of my suggestions is that they take a look at actors.

    Since I grasped that the messages are the important thing in OO, however, the way I design objects has changed dramatically. The first question I ask is: what are the behaviors? This in turn tells me what messages will be sent. I then consider the invariants – that is, rules that the behaviors must adhere to. Finally, by grouping invariants by the state they care about, I can identify the objects that will be involved, and thus classes. In this approach, the methods come first, and the state comes last, usually discovered as I TDD my way through implementing the methods.

    Accessors should carry a health warning

    An accessor method is a means to access, or mutate, the state held within a particular attribute of an object. This is something I believe we should do far more hesitantly than is common. Objects are intended to hide state behind a set of interesting operations. The moment the underlying state model is revealed to the outside world, our ability to refactor is diminished. The world outside of our object couples to that view of it, and it becomes far too tempting to put operations that belong inside of the object on the outside. Note that a get-accessor is a unidirectional coupling, while a mutate-accessor implies a bidirectional (and so tighter) coupling.

    But it’s not just refactoring that suffers. Mutable state is one of the things that makes programs difficult to understand and reason about. Functional programming suggests abstinence. OO suggests you just stick to a pint or two, so your side-effects will be at least somewhat less obnoxious. It does this by having objects present a nice message-y view to the outside world, and keeping mutation of state locked up inside of objects. Ideas such as value objects and immutable objects take things a step further. These have objects build new objects that incorporate changes, as opposed to mutating objects in place. Perl 6 encourages these in various ways (notice how clone lets you tweak data in the resulting object, for example).
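
    A minimal sketch of that clone idiom (the Point class is made up for illustration): named arguments to clone override attributes in the copy, so callers get a new value rather than mutating the original in place.

    class Point {
        has $.x;
        has $.y;
    }

    my $p       = Point.new(x => 1, y => 2);
    my $shifted = $p.clone(y => 42);   # copy with one attribute tweaked
    say $p.y;          # 2 -- the original is untouched
    say $shifted.y;    # 42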

    Furthermore, Perl 6 supports concurrent and parallel programming. Value objects and immutable objects are a great fit for that. But what about objects that have to mutate their state? This is where state leakage will really, really, end up hurting. Using OO::Monitors or OO::Actors, turning an existing class into a monitor (method calls are synchronous but enforce mutual exclusion) or an actor (method calls are asynchronous and performed one at a time on a given object) is – in theory – easy. It’s only that easy, however, if the object does not leak its state, and if all complex operations on the object are expressed as a single method. Contrast:

    unless $seat.passenger {
        $seat.passenger = $passenger;
    }
    

    With:

    $seat.assign-to($passenger);
    

    Where the method does:

    method assign-to($passenger) {
        die "Seat already taken!" if $!passenger;
        $!passenger = $passenger;
    }
    

    Making the class of which $seat is an instance into a monitor won’t do a jot of good in the accessor/mutator case; there’s still a gaping data race. With the second approach, we’d be safe.
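
    For a sketch of how little ceremony the safe version needs: OO::Monitors really does provide a monitor declarator, though this Seat class is my own illustration.

    use OO::Monitors;

    # A monitor ensures only one thread at a time runs code inside a
    # given instance, so the test and the assignment below can never
    # interleave with another thread's call on the same seat.
    monitor Seat {
        has $!passenger;

        method assign-to($passenger) {
            die "Seat already taken!" if $!passenger;
            $!passenger = $passenger;
        }
    }

    Because the whole test-and-set lives inside one method call, mutual exclusion makes it atomic; with the accessor version there is no single operation for the monitor to protect.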

    So if mutate accessors are so bad, why does Perl 6 have them at all?

    To me, the best use of is rw on attribute accessors is for procedural programming. They make it easy to create mutable record types. I’d also like to be absolutely clear that there’s no shame in procedural programming. Good OO design is hard. There’s a reason Perl 6 kept both sub and method, rather than calling everything a method and then coining the term “static method” because “subroutine” sounds procedural and “that was the past”. It’s OK to write procedural code. I’d choose to deal with well-organized procedural code over sort-of-but-not-really-OO code any day. OO badly used tends to put the moving parts further from each other, rather than encapsulating them.

    Put another way, class is there to serve more than one purpose. As in many languages, it doubles up as the thing used for doing real OO programming, and a way to define a record type.
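
    For example, a mutable record type takes only a few lines (a hypothetical sketch):

    # A plain mutable record: public attributes with writable accessors.
    class Employee {
        has $.name   is rw;
        has $.salary is rw = 0;
    }

    my $e = Employee.new(name => 'Ada');
    $e.salary = 50_000;    # the generated accessor is an lvalue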

    So what to do instead of a fancy mutator?

    Write methods for semantically interesting operations that just happen to set an attribute among their various other side-effects. Give the methods appropriate and informative names so the consumer of the class knows what they will do. And please do not try to hide complex operations, potentially with side-effects like I/O, behind something that looks like an assignment. This:

    $analyzer.file = 'foo.csv';
    

    Will lead most readers of the code to think they’re simply setting a property. The = is the assignment operator. In Perl 6, we make + always mean numeric addition, and pick ~ to always mean string concatenation. It’s a language design principle that operators should have predictable semantics, because in a dynamic language you don’t statically know the types of the operands. This kind of predictability is valuable. In a sense, languages that make it easy to provide custom mutator behavior are essentially making it easy to overload the assignment operator with additional behaviors. (And no, I’m not saying that’s always wrong, simply that it’s inconsistent with how we view operators in Perl 6.)
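
    A hypothetical sketch of the alternative being argued for: an ordinary, descriptively named method, so the reader can see at the call site that more than an assignment happens.

    class Analyzer {
        has @!lines;

        # A named method makes the file read visible at the call site,
        # where an assignment-shaped mutator would hide it.
        method load-file($path) {
            @!lines = $path.IO.lines;
        }
    }

    Analyzer.new.load-file('foo.csv');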

    By the way, this is also the reason Perl 6 allows definition of custom operators. It’s not because we thought building a mutable parser would be fun (I mean, it was, but in a pretty masochistic way). It’s to discourage operators from being overloaded with unrelated and surprising meanings.
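
    A minimal illustration of the mechanism (the operator itself is invented): rather than bending = or + into a new meaning, you define an operator whose name carries the new semantics.

    # A new infix operator with its own symbol, instead of overloading
    # an existing operator with a surprising meaning.
    sub infix:<⊕>($a, $b) { ($a + $b) % 256 }

    say 200 ⊕ 100;    # 44 -- wrap-around byte addition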

    And when to use Proxy?

    When you really do just want more control over something that behaves like an assignment. A language binding for a C library that has a bunch of get/set functions to work with various members of a struct would be a good example.
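
    A sketch of that situation, assuming a hypothetical C library with a get/set pair (the native function names and library name are invented; Proxy's FETCH/STORE interface and NativeCall's is native trait are real):

    use NativeCall;

    # Hypothetical C functions: int lib_get_level(void) and
    # void lib_set_level(int).
    sub lib_get_level(--> int32)      is native('somelib') { * }
    sub lib_set_level(int32 $level)   is native('somelib') { * }

    class LibConfig {
        # level behaves like a plain assignable property, but reads and
        # writes are forwarded to the C library's accessor functions.
        method level() is rw {
            Proxy.new(
                FETCH => -> $     { lib_get_level() },
                STORE => -> $, $v { lib_set_level($v) },
            )
        }
    }

    my $cfg = LibConfig.new;
    $cfg.level = 3;        # STORE: calls lib_set_level(3)
    say $cfg.level;        # FETCH: calls lib_get_level()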

    In summary…

    Language design is difficult, and involves making all manner of choices where there is no universally right or wrong answer, but just trade-offs. The aim is to make choices that form a consistent whole – which is far, far easier said than done, because there are usually a dozen different ways to be consistent too. The choice to dehuffmanize (that is, make longer) the writing of complex mutators is because it:


    Steve Mynott: Rakudo Star 2016.11 Release Candidate

    Published by Steve Mynott on 2016-11-20T14:01:22

    There is a Release Candidate for Rakudo Star 2016.11 (currently RC2) available at

    http://pl6anet.org/drop/

    This includes binary installers for Windows and Mac.

    Usually Star is released about every three months, but last month's release didn't include a Windows installer, so there is an extra release this time.

    I'm hoping to release the final version next weekend and would be grateful if people could try this out on as many systems as possible.

    Please email any feedback to steve *dot* mynott *at* gmail *dot* com

    The full draft announcement is at

    https://github.com/rakudo/star/blob/master/docs/announce/2016.11.md

    brrt to the future: A guide through register allocation: Introduction

    Published by Bart Wiegmans on 2016-11-06T10:29:00

    This is the first post in what I intend to be a series on the register allocator for the MoarVM JIT compiler. It may be a bit less polished than usual, because I also intend to write more of these posts than I have in the past few months.

    The main reason to write a register allocator is that it is needed by the compiler. The original 'lego' MoarVM JIT didn't need one, because it used what is called a 'memory-to-memory' model, meaning that every operation is expected to move operands from and to memory. In this it closely follows the behavior of virtually every existing interpreter, and especially that of MoarVM. However, many of these memory operations are logically redundant (for example, when storing and immediately loading an intermediate value, or loading the same value twice). Such redundancies are inherent to a memory-to-memory code model. In theory some of that can be optimized away, but in practice that involves building an unreasonably complicated state machine.

    The new 'expression' JIT compiler was designed with the explicit (well, explicit to me, at least) goals of enabling optimization and specialization of machine code. That meant that a register-to-register code model was preferable, as it makes all memory operations explicit, which in turn enables optimization to remove some of them. (Most redundant 'load' operations can already be eliminated, and I'm plotting a way to remove most redundant 'store' operations, too). However, that also means the compiler must ensure that values can fit into the limited register set of the CPU, and that they aren't accidentally overwritten (for example as a result of a subroutine call). The job of the register allocator is to translate virtual registers to physical registers in a given code segment. This may involve modifying the original code by inserting load, store and copy operations.

    Register allocation is known as a hard problem in computer science, and I think there are two reasons for that. The first reason is that finding the optimal allocation for a code segment is (probably) NP-complete. (NP-complete means, roughly, that no known algorithm can always find the optimal solution without, in the worst case, considering an enormous number of candidates. A common feature of NP-complete problems is that the effect of a local choice on the global solution cannot be fully predicted.) However, for what I think are excellent reasons, I can sidestep most of that complexity using the 'linear scan' register allocation algorithm. The details of that algorithm are the subject of a later post.

    The other reason that register allocation is hard is that the output code must meet the demanding specifications of the target CPU. For instance, some instructions take input only from specific registers, and some implicitly overwrite other registers. Calling conventions can also present a significant source of complexity as values must be placed in the right registers (or on the right stack locations) where the called function may expect them. So the register allocator must somehow encode these specific demands and ensure they are not violated.

    Now that I've introduced register allocation, why it is needed, and what the challenges are, the next posts can begin to describe the solutions that I'm implementing.