RFE: Skeleton for DMA layer #306
Conversation
pub fn map_dma_ranges(
I understand this is a draft ... could you add a doc comment so that folks know the intended contract and usage here?
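As a sketch of what such a doc comment might state, here is one possible contract for `map_dma_ranges`. The signature, the `(gpa, len)` range representation, and the drop-based release semantics are assumptions for illustration, not the PR's final API:

```rust
/// Handle returned by `map_dma_ranges`; in this sketch it simply records
/// the ranges and would release the mapping on drop.
struct DmaTransaction {
    ranges: Vec<(u64, usize)>,
}

struct DmaClient;

impl DmaClient {
    /// Maps the given guest physical ranges for DMA.
    ///
    /// Contract (sketch): each `(gpa, len)` range is either pinned in
    /// place or copied into a bounce buffer, per the client's policy.
    /// The returned transaction keeps the mapping alive and is expected
    /// to unpin/free it when dropped.
    fn map_dma_ranges(&self, ranges: &[(u64, usize)]) -> DmaTransaction {
        DmaTransaction {
            ranges: ranges.to_vec(),
        }
    }
}

fn main() {
    let client = DmaClient;
    let txn = client.map_dma_ranges(&[(0x1000, 4096)]);
    assert_eq!(txn.ranges.len(), 1);
}
```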
/// Adds a new client to the list and stores its pinning threshold
fn register_client(&self, client: &Arc<DmaClient>, threshold: usize) {
Is there a need to have per-client bounce buffers?
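For context, the registration path under discussion could look like the following sketch. The field names, the meaning of `threshold` (transfers at or below it get bounced), and the `Mutex<Vec<...>>` storage are assumptions, not the PR's actual implementation:

```rust
use std::sync::{Arc, Mutex};

struct DmaClient {
    name: String,
}

#[derive(Default)]
struct GlobalDmaManager {
    // Each entry pairs a client with its pinning threshold (assumption:
    // transfers <= threshold are bounced, larger ones are pinned).
    clients: Mutex<Vec<(Arc<DmaClient>, usize)>>,
}

impl GlobalDmaManager {
    /// Adds a new client to the list and stores its pinning threshold.
    fn register_client(&self, client: &Arc<DmaClient>, threshold: usize) {
        self.clients.lock().unwrap().push((client.clone(), threshold));
    }
}

fn main() {
    let mgr = GlobalDmaManager::default();
    let nvme = Arc::new(DmaClient { name: "nvme".into() });
    mgr.register_client(&nvme, 4096);
    assert_eq!(mgr.clients.lock().unwrap().len(), 1);
    println!("registered client: {}", nvme.name);
}
```

Whether the bounce buffers themselves are per-client or shared (the reviewer's question above) is orthogonal to this registry shape; a shared pool would simply live alongside `clients` on the manager.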
// Trait for the DMA interface
pub trait DmaInterface {
Do you envision that this would replace other uses of bounce buffering? (for example, copying from private memory into shared memory for isolated VMs, or when the block disk bounces for arm64 guests)?
Yes, I do envision that. A policy on GlobalDmaManager can control the behavior system-wide.
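A system-wide policy on the manager might be modeled as below. The enum, its variants, and `should_bounce` are hypothetical names sketching the idea, not part of the PR:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum DmaPolicy {
    // Pin pages when possible; fall back to bounce buffers otherwise.
    PreferPinning,
    // Always copy through bounce buffers, e.g. for isolated VMs that
    // must stage DMA in shared memory.
    AlwaysBounce,
}

struct GlobalDmaManager {
    policy: DmaPolicy,
}

impl GlobalDmaManager {
    /// Decides, system-wide, whether a transfer must be bounced.
    fn should_bounce(&self, pinnable: bool) -> bool {
        match self.policy {
            DmaPolicy::AlwaysBounce => true,
            DmaPolicy::PreferPinning => !pinnable,
        }
    }
}

fn main() {
    let mgr = GlobalDmaManager { policy: DmaPolicy::AlwaysBounce };
    assert!(mgr.should_bounce(true));
}
```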
Got it. How do you envision handling the case:
- Have a VTL0 VM where sometimes memory needs to be pinned.
- This particular transaction's memory must be placed in a bounce buffer, even if pinning would otherwise succeed?
(I'm thinking about the block device driver here, where it would never want to pin memory - the kernel doesn't know about the VTL0 addresses.)
We discussed this offline. map_dma_ranges will take additional per-transaction parameters. For example, some clients may want all transactions to be placed into the bounce buffer.
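A minimal sketch of such per-transaction parameters, assuming hypothetical names (`DmaTransactionOptions`, `force_bounce`); the actual fields would be whatever the PR settles on:

```rust
#[derive(Default)]
struct DmaTransactionOptions {
    /// Force bounce buffering even when pinning would succeed, e.g. a
    /// block device driver that must never pin VTL0 memory because the
    /// kernel doesn't know about the VTL0 addresses.
    force_bounce: bool,
}

/// Combines the per-transaction request with whether the memory is
/// pinnable at all.
fn choose_bounce(opts: &DmaTransactionOptions, pinnable: bool) -> bool {
    opts.force_bounce || !pinnable
}

fn main() {
    // A client that always bounces, even though pinning would work.
    assert!(choose_bounce(&DmaTransactionOptions { force_bounce: true }, true));
    // Default policy: pin when possible.
    assert!(!choose_bounce(&DmaTransactionOptions::default(), true));
}
```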
static GLOBAL_DMA_MANAGER: OnceCell<Arc<GlobalDmaManager>> = OnceCell::new();

/// Global DMA Manager to handle resources and manage clients
pub struct GlobalDmaManager {
What settings will this manager have? Which of those do you expect to expose in Vtl2Settings?
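The global-manager pattern in the diff can be sketched as follows, substituting std's `OnceLock` for the `once_cell` crate's `OnceCell` so the example is self-contained. The `bounce_buffer_size` field is a placeholder for whatever settings the manager ends up exposing (possibly via Vtl2Settings), not a field from the PR:

```rust
use std::sync::{Arc, OnceLock};

struct GlobalDmaManager {
    // Placeholder setting; candidate for exposure in Vtl2Settings.
    bounce_buffer_size: usize,
}

// Lazily-initialized process-wide singleton, mirroring the PR's
// `static GLOBAL_DMA_MANAGER: OnceCell<Arc<GlobalDmaManager>>`.
static GLOBAL_DMA_MANAGER: OnceLock<Arc<GlobalDmaManager>> = OnceLock::new();

fn global_manager() -> &'static Arc<GlobalDmaManager> {
    GLOBAL_DMA_MANAGER.get_or_init(|| {
        Arc::new(GlobalDmaManager {
            bounce_buffer_size: 4 << 20, // 4 MiB, arbitrary default
        })
    })
}

fn main() {
    assert_eq!(global_manager().bounce_buffer_size, 4 << 20);
}
```

Every call to `global_manager()` after the first returns the same `Arc`, so clients can cheaply clone a handle to the shared manager.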