[Barrelfish-users] Scheduling and malleable tasks

Georgios Varisteas yorgos at kth.se
Wed Aug 10 15:53:51 CEST 2011

Hi Simon,

Thank you for all the pointers. They led to quite a few insights.

Regarding phase-locked gang scheduling, it is cool indeed, but I'm planning to investigate a different approach for now. My first goal is to avoid domain overlap between multiple applications by creating a framework in which all tasks are malleable, along the lines of "Adaptive Work Stealing with Parallelism Feedback" (http://citeseer.ist.psu.edu/viewdoc/summary?doi= When that is inadequate (i.e., in the absence of enough cores), phase-locked gang scheduling could be used to minimize context switching.

My main concern is keeping the complexity of such a mechanism low, so as not to waste CPU time on scheduling itself.

Best regards,
Georgios Varisteas

From: Simon Peter [speter at inf.ethz.ch]
Sent: Wednesday, August 10, 2011 14:11
To: Georgios Varisteas
Cc: barrelfish-users at lists.inf.ethz.ch; Timothy Roscoe
Subject: Re: [Barrelfish-users] Scheduling and malleable tasks

Hi Georgios,

You are correct that there are two kernel-level schedulers (RR and
RBED). The RR scheduler is deprecated and only kept for compatibility
with some of our less well-equipped build targets. You can read up on
RBED here:


At user-level, we have a simple round-robin thread scheduler.

There is a little bit of support for inter-core scheduling in the
resource controller in the monitor (cf. usr/monitor/resource_ctrl.c).
For your extensions, I suggest you start here. The concepts used are
presented in this paper:


This enables you to e.g. do phase-locked gang scheduling, which is
pretty cool.

Mothy will have to advise on the code contribution policy.

Hope this helps,

On 10.08.2011 11:45, Georgios Varisteas wrote:
> Hi,
> A few words about what I'm working on here at KTH, Sweden. After fully porting our home-grown work-stealing library, WOOL, I've moved on to investigating domain scheduling abstractions: dynamically modifying the number of cores assigned to each application according to actual usage and requirements. More simply put, space-sharing.
> After reading the documentation and going through the code a bit, I would like to verify a few things. According to the docs, at the single-core level there is a round-robin scheduler that circularly activates one dispatcher after another. There is also an RBED scheduler implementation, which is selected over RR in the generated Config.hs. I would really appreciate any detailed insight into how things actually work at this point; pointers on where to look for more information would also suffice.
> So far, I haven't found any hints of a mechanism that automatically and intelligently modifies domains during execution, or of any inter-core scheduler for that matter. If that is true, where in the tree should I start implementing my scheduler? How can it cooperate with the existing API? Is there anything non-obvious I should look out for, considering that I do not yet have a complete understanding of the system?
> As a sidenote, what is the process/policy for contributing to the repository?
> Best regards,
> Georgios Varisteas
> _______________________________________________
> Barrelfish-users mailing list
> Barrelfish-users at lists.inf.ethz.ch
> https://lists.inf.ethz.ch/mailman/listinfo/barrelfish-users
