[Barrelfish-users] Intel SCC latency measurements for MPB operations
troscoe at inf.ethz.ch
Wed Mar 2 14:30:38 CET 2011
The data we have right now (and the methodology) is listed in the
Barrelfish Technical Note on the SCC port (which is part of the
distribution); the measurements were taken on a Copper Ridge board, which is
probably not the one you have access to, but we have not seen any
significant differences between it and our Rocky Lake board here.
On Barrelfish we don't use RCCE - we have an RCCE-compatible library
built over the Barrelfish SCC interconnect driver, which talks
directly to the hardware.
At Wed, 2 Mar 2011 14:07:16 +0200, Konstantin Zertsekel <zertsekel at gmail.com> wrote:
> Prof. Timothy, thanks for the answer.
> > I'm not sure what communication stack your graphs are using (whether
> > this is bare-metal RCCE, RCCE over Linux TCP, etc.).
> As I understand it, the test was conducted on RCCE over Linux (though this
> assumption still needs verification...).
> > On Barrelfish we require a trap to kernel mode for inter-core messages,
> > which in practice dominates the time taken to access the MPB (once
> > you're in the kernel, we can transfer a cache line to another core's
> > on-tile MPB in a hundred clocks or so, as Intel advertises).
> Did you perform a measurement like RCCE PingPong to ascertain
> the overhead of the kernel-mode trap and the time of the actual cache-line
> transfer to another core's on-tile MPB?
> According to the 1996 article "A Performance Comparison of UNIX
> Operating Systems on the Pentium"
> (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.5759), in which
> an Intel Pentium P54C at 100MHz was used for the measurements, a getpid()
> system call costs ~2.5 microseconds.
> Now, the default tile frequency on the SCC is 533MHz (when the router
> frequency is 800MHz), but a context switch / system call of that order
> may still dominate the time taken to access the MPB...
> Thanks, KostaZ.