[Barrelfish-users] Fwd: How to reclaim the memory allocated by frame_alloc?

Shi Jinghao (史经浩) jhshi at cs.hku.hk
Thu Mar 29 12:14:58 CEST 2012


I see. So you mean the process of cap transfer is expensive because it
involves many kernel/monitor operations on both sides, not because the
frame itself is **copied** to the receiver. Is my understanding right?

Since my message size is variable, sending it as a fixed-size byte array
may waste bandwidth. I'll see whether bulk transfer solves my problem.

Thanks for your hints, really appreciate it.

Regards,
Jinghao

2012/3/29 Baumann Andrew <andrewb at inf.ethz.ch>

> Hi,
>
> Sending a cap is expensive, because it is just a reference to some
> protected kernel state, so the transfer goes out of band and involves work
> in the kernel and monitor on both sender and receiver cores. If you know
> you have shared memory hardware, the best thing to do is use a cap transfer
> to establish a shared region, and then send small messages to coordinate
> the use of that shared region. This is exactly what the bulk transport code
> is supposed to do. Although the current API is a bit lacking, it sounds
> like it might be enough for what you need.
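>
> For concreteness, here is a rough sketch of that pattern (the function and
> variable names are made up and error handling is omitted): the cap transfer
> happens exactly once at setup, and afterwards each message is just a memcpy
> into the shared region plus a small ordinary message telling the peer what
> to read.
>
>     #include <stdint.h>
>     #include <string.h>
>     #include <barrelfish/barrelfish.h>
>
>     #define SHARED_SIZE (1UL << 20)  // one region, reused for every message
>
>     static struct capref shared_cap; // transferred to the peer once, at setup
>     static uint8_t *shared_buf;      // local mapping of the shared region
>
>     // hypothetical stub: a small message carrying (offset, length)
>     void notify_peer(size_t offset, size_t length);
>
>     static void setup_shared_region(void)
>     {
>         frame_alloc(&shared_cap, SHARED_SIZE, NULL);
>         vspace_map_one_frame_attr((void **)&shared_buf, SHARED_SIZE,
>                                   shared_cap, VREGION_FLAGS_READ_WRITE,
>                                   NULL, NULL);
>         // ...send shared_cap to the peer here, exactly once...
>     }
>
>     static void send_variable_sized(const void *msg, size_t len)
>     {
>         memcpy(shared_buf, msg, len);  // payload goes through shared memory
>         notify_peer(0, len);           // only a tiny message crosses the channel
>     }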
>
> Hope this helps,
> Andrew
>
> From: jhshi89 at gmail.com [mailto:jhshi89 at gmail.com] On Behalf Of Shi Jinghao (史经浩)
> Sent: Tuesday, 27 March, 2012 22:48
> To: Baumann Andrew
> Cc: barrelfish-users at lists.inf.ethz.ch
> Subject: Re: [Barrelfish-users] Fwd: How to reclaim the memory allocated by frame_alloc?
>
> Hi,
>
> Yes, at first I was trying to send the message as a byte array as you
> mentioned. But my typical message size is about 4~8 KB, and I found that
> Flounder generates quite a long sending function (~15k lines of code, if I
> remember correctly). This is because Flounder divides the message body into
> small fragments and sends them in a switch statement... That's why I chose
> to send the message cap instead.
>
> Is sending a cap an expensive operation? I don't know. I thought I was only
> sending the capref itself (a few bytes), not the whole frame that the capref
> points to. Do you mean that when I send the capref of an 8 KB frame, there
> will actually be an 8 KB memory copy underneath? If so, then I may really
> need to consider using the bulk transfer library.
>
> Thanks,
> Jinghao
>
> 2012/3/28 Baumann Andrew <andrewb at inf.ethz.ch>
>
> Hi,
>
> This is a very inefficient way to send a message. Sending the cap is an
> expensive operation. If the messages you are sending are relatively small
> (e.g. a few hundred bytes), you can send them as a byte array (e.g. uint8
> buf[len] in a flounder spec). If the messages are large, you should
> probably look into using the bulk transport library, which uses shared
> memory under the covers, but is more efficient about reusing the same
> memory and not transferring capabilities with each message.
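>
> As a small illustration of the byte-array option: the struct layout and the
> send_bytes() call below are hypothetical stand-ins (the real send function
> is whatever flounder generates from your .if file); the point is that only
> the bytes actually in use are handed to the channel.
>
>     #include <stdint.h>
>     #include <string.h>
>
>     // hypothetical stand-in for the flounder-generated send function
>     void send_bytes(const uint8_t *buf, size_t len);
>
>     struct small_msg {
>         uint32_t kind;         // example header fields
>         uint32_t payload_len;
>         uint8_t  payload[];    // variable-size tail
>     };
>
>     static void send_as_byte_array(const struct small_msg *msg)
>     {
>         // send only what is in use, not a fixed-size buffer
>         size_t len = sizeof(*msg) + msg->payload_len;
>         send_bytes((const uint8_t *)msg, len);
>     }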
>
> To answer your direct question, it looks like what you are doing is
> correct, but I doubt that the system correctly reclaims all the memory, and
> I know that revoke is broken across cores in the current tree. You may also
> find it is easier to unmap and delete the cap on both cores, rather than
> revoking it on the sender side.
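>
> In other words, something like the following sketch (msgcap and msg are the
> variables from your mail; error handling omitted):
>
>     // sender side, once the receiver is done with the frame
>     vspace_unmap(msg);     // drop the sender's mapping
>     cap_destroy(*msgcap);  // delete the sender's copy of the cap
>     free(msgcap);          // msgcap itself came from malloc
>
>     // receiver side
>     vspace_unmap(msg);     // drop the receiver's mapping
>     cap_destroy(msgcap);   // delete the receiver's copy of the cap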
>
> Andrew
>
> From: Shi Jinghao (史经浩) [mailto:jhshi at cs.hku.hk]
> Sent: Tuesday, 27 March, 2012 6:28
> To: barrelfish-users at lists.inf.ethz.ch
> Subject: [Barrelfish-users] Fwd: How to reclaim the memory allocated by frame_alloc?
>
> Just realized that I sent the mail using gmail, which seems to be filtered
> by this mailing list. Please ignore this if you've seen it already.
>
> Regards,
> Jinghao
>
> ---------- Forwarded message ----------
> From: Shi Jinghao <jhshi89 at gmail.com>
> Date: Tue, Mar 27, 2012 at 9:23 PM
> Subject: How to reclaim the memory allocated by frame_alloc?
> To: barrelfish-users at lists.inf.ethz.ch
>
>
> Hi,
>
> I need to send messages between cores in Barrelfish (x86_32), and I decided
> to send the capability of a frame that contains the message, since the
> messages have variable size. I was wondering what the proper way is to
> reclaim the message frame once it is no longer needed.
>
> I create a message frame like this (details like error checking are
> omitted; for simplicity, assume the message size is fixed (MSG_SIZE)):
>
>
>     // allocate a frame capability
>     struct capref *msgcap = (struct capref *) malloc(sizeof(struct capref));
>     frame_alloc(msgcap, MSG_SIZE, NULL);
>
>     // map to my address space so I can fill the message body
>     struct my_message *msg;
>     vspace_map_one_frame_attr((void **)&msg, MSG_SIZE, *msgcap,
>                               VREGION_FLAGS_READ_WRITE, NULL, NULL);
>
> Then I send the frame cap to another process (on a different core).
>
> On the receiver side, I get the message like this:
>
>     struct my_message *msg;
>     vspace_map_one_frame_attr((void **)&msg, MSG_SIZE, msgcap,
>                               VREGION_FLAGS_READ_WRITE, NULL, NULL);
>
> However, how am I supposed to reclaim the message? Based on my
> understanding, my current attempt is:
>
>     // on the sender side
>     cap_revoke(*msgcap);
>     cap_destroy(*msgcap);
>     free(msgcap);       // msgcap was allocated with malloc
>     vspace_unmap(msg);
>
>     // on the receiver side
>     vspace_unmap(msg);
>
> But I don't know whether this is the right way to do it (though no obvious
> problems have occurred so far). Any suggestions?
>
> PS: on the receiver side, vspace_map_one_frame_attr won't allocate any new
> memory, right? So after this, the sender and receiver will share the same
> physical memory. Please correct me if my understanding is wrong.
>
> Thanks for reading.
>
> Regards,
> Jinghao
>