pub struct Talc<O: OomHandler> {
    pub oom_handler: O,
    /* private fields */
}
The Talc Allocator!
One way to get started:
- Construct with `new` (supply `ErrOnOom` to ignore OOM handling).
- Establish any number of heaps with `claim`.
- Call `lock` to get a [Talck] which supports the `GlobalAlloc` and `Allocator` traits.

Check out the associated functions `new`, `claim`, `lock`, `extend`, and `truncate`.
Fields§
§oom_handler: O
The user-specified OOM handler. Its state is entirely maintained by the user.
Implementations§
impl<O: OomHandler> Talc<O>
pub unsafe fn malloc(&mut self, layout: Layout) -> Result<NonNull<u8>, ()>
Allocate a contiguous region of memory according to layout, if possible.
§Safety
layout.size() must be nonzero.
pub unsafe fn free(&mut self, ptr: NonNull<u8>, layout: Layout)
Free previously allocated/reallocated memory.
§Safety
ptr must have been previously allocated given layout.
pub unsafe fn grow(
&mut self,
ptr: NonNull<u8>,
old_layout: Layout,
new_size: usize,
) -> Result<NonNull<u8>, ()>
Grow a previously allocated/reallocated region of memory to new_size.
§Safety
`ptr` must have been previously allocated or reallocated given `old_layout`.
`new_size` must be larger or equal to `old_layout.size()`.
pub unsafe fn grow_in_place(
&mut self,
ptr: NonNull<u8>,
old_layout: Layout,
new_size: usize,
) -> Result<NonNull<u8>, ()>
Attempt to grow a previously allocated/reallocated region of memory to new_size.
Returns Err if reallocation could not occur in-place.
Ownership of the memory remains with the caller.
§Safety
`ptr` must have been previously allocated or reallocated given `old_layout`.
`new_size` must be larger or equal to `old_layout.size()`.
pub unsafe fn shrink(
&mut self,
ptr: NonNull<u8>,
layout: Layout,
new_size: usize,
)
Shrink a previously allocated/reallocated region of memory to new_size.
This function is infallible given valid inputs, and the reallocation will always be done in-place, maintaining the validity of the pointer.
§Safety
- `ptr` must have been previously allocated or reallocated given `layout`.
- `new_size` must be smaller or equal to `layout.size()`.
- `new_size` should be nonzero.
pub const fn new(oom_handler: O) -> Self
Returns an uninitialized Talc.
If you don’t want to handle OOM, use [ErrOnOom].
In order to make this allocator useful, claim some memory.
pub unsafe fn get_allocated_span(&self, heap: Span) -> Span
Returns the minimum span containing the heap's currently allocated memory.
pub unsafe fn claim(&mut self, memory: Span) -> Result<Span, ()>
Attempt to initialize a new heap for the allocator.
Note:
- Each heap reserves a `usize` at the bottom as fixed overhead.
- Metadata will be placed into the bottom of the first successfully established heap. It is currently ~1KiB on 64-bit systems (less on 32-bit). This is subject to change.
§Return Values
The resulting Span is the actual heap extent, and may
be slightly smaller than requested. Use this to resize the heap.
Any memory outside the claimed heap is free to use.
Returns `Err` where:
- allocator metadata is not yet established, and there’s insufficient memory to do so.
- allocator metadata is established, but the heap is too small (less than around `4 * usize` for now).
§Safety
- The memory within `memory` must be valid for reads and writes, and memory therein (when not allocated to the user) must not be mutated while the allocator is in use.
- `memory` should not overlap with any other active heap.
§Panics
Panics if memory contains the null address.
pub unsafe fn extend(&mut self, old_heap: Span, req_heap: Span) -> Span
Increase the extent of a heap. The new extent of the heap is returned, and will be equal to or slightly smaller than requested.
§Safety
- `old_heap` must be the return value of a heap-manipulation function of this allocator instance.
- The entire `req_heap` memory must be readable and writable and unmutated, besides that which is allocated, so long as the heap is in use.
§Panics
This function panics if:
- `old_heap` is too small or heap metadata is not yet allocated
- `req_heap` doesn’t contain `old_heap`
- `req_heap` contains the null address
A recommended pattern for satisfying these criteria is:
```rust
let mut heap = [0u8; 2000];
let old_heap = Span::from(&mut heap[300..1700]);
let old_heap = unsafe { talc.claim(old_heap).unwrap() };

// compute the new heap span as an extension of the old span
let new_heap = old_heap.extend(250, 500).fit_within((&mut heap[..]).into());

// SAFETY: be sure not to extend into memory we can't use
let new_heap = unsafe { talc.extend(old_heap, new_heap) };
```
pub unsafe fn truncate(&mut self, old_heap: Span, req_heap: Span) -> Span
Reduce the extent of a heap. The new extent must encompass all current allocations. See below.
The resultant heap is always equal to or slightly smaller than req_heap.
Truncating to an empty Span is valid for heaps where no memory is
currently allocated within it.
In all cases where the return value is empty, the heap no longer exists.
You may do what you like with the heap memory. The empty span should not be
used as input to truncate, extend,
or get_allocated_span.
§Safety
old_heap must be the return value of a heap-manipulation function
of this allocator instance.
§Panics
This function panics if:
- `old_heap` doesn’t contain `req_heap`
- `req_heap` doesn’t contain all the allocated memory in `old_heap`
- the heap metadata is not yet allocated, see `claim`
§Usage
A recommended pattern for satisfying these criteria is:
```rust
let mut heap = [0u8; 2000];
let old_heap = Span::from(&mut heap[300..1700]);
let old_heap = unsafe { talc.claim(old_heap).unwrap() };

// note: lock a `Talck` here, otherwise a race condition may occur
// in between Talc::get_allocated_span and Talc::truncate

// compute the new heap span as a truncation of the old span
let new_heap = old_heap
    .truncate(250, 300)
    .fit_over(unsafe { talc.get_allocated_span(old_heap) });

// truncate the heap
unsafe { talc.truncate(old_heap, new_heap); }
```
impl<O: OomHandler> Talc<O>
pub const fn lock<R: RawMutex>(self) -> Talck<R, O>
Wrap in Talck, a mutex-locked wrapper struct using lock_api.
This implements the GlobalAlloc trait and provides
access to the Allocator API.
§Examples
```rust
use core::alloc::{GlobalAlloc, Layout};
use spin::Mutex;
use talc::{ErrOnOom, Talc};

let talc = Talc::new(ErrOnOom);
let talck = talc.lock::<Mutex<()>>();

unsafe {
    talck.alloc(Layout::from_size_align_unchecked(32, 4));
}
```