Friday, June 15, 2018


Memory management is a form of resource management applied to computer memory. An essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free them for reuse when they are no longer needed. This is critical to any advanced computer system where more than one process may be underway at any time.

Several methods have been devised to improve the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance.



Details

Application-level memory management is generally categorized as automatic memory management, typically involving garbage collection, or manual memory management.

Dynamic memory allocation

The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.

Several issues complicate the implementation, such as external fragmentation, which arises when there are many small gaps between allocated memory blocks, invalidating their use for an allocation request. The allocator's metadata can also inflate the size of (individually) small allocations. This is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" (i.e., that there are no "memory leaks").

Efficiency

The specific dynamic memory allocation algorithm used can significantly affect performance. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction-level profiler on a variety of software).

Implementation

Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods:

Fixed size block allocation

Fixed-size block allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but it suffers from fragmentation, especially with long memory addresses. However, due to the significantly reduced overhead this method can substantially improve performance for objects that need frequent allocation and deallocation, and so it is often used in video games.

Buddy blocks

In this system, memory is allocated into several pools of memory instead of just one, where each pool represents blocks of memory of a certain power-of-two size, or blocks of some other convenient size progression. All blocks of a particular size are kept in a sorted linked list or tree, and all new blocks formed during allocation are added to their respective memory pools for later use. If a smaller size is requested than is available, the smallest available size is selected and split. One of the resulting halves is selected, and the process repeats until the request is complete. When a block is allocated, the allocator starts with the smallest sufficiently large block, to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy; if both are free, they are combined and placed in the correspondingly larger-sized buddy-block list.

Slab allocation
Stack allocation

Automatic variables

In many programming language implementations, all variables declared within a procedure (subroutine, or function) are local to that function; the runtime environment for the program automatically allocates memory for these variables on program execution entry to the procedure, and automatically releases that memory when the procedure is exited. Special declarations may allow local variables to retain values between invocations of the procedure, or may allow local variables to be accessed by other procedures. The automatic allocation of local variables makes recursion possible, to a depth limited by available memory.

Garbage collection

Garbage collection is a strategy for automatically detecting memory allocated to objects that are no longer usable by a program, and returning that allocated memory to a pool of free memory locations. This method is in contrast to "manual" memory management, where a programmer explicitly codes memory requests and memory releases in the program. While automatic garbage collection has the advantages of reducing programmer workload and preventing certain kinds of memory allocation bugs, garbage collection does require memory resources of its own, and can compete with the application program for processor time.


Systems with virtual memory

Virtual memory is a method of decoupling the memory organization from the physical hardware. Applications operate on memory via virtual addresses. Each attempt by the application to access stored data is translated by the virtual memory subsystem from a virtual address to a physical address. In this way the addition of virtual memory enables granular control over the memory system and its methods of access.

In virtual memory systems the operating system limits how a process can access memory. This feature, called memory protection, can be used to disallow a process from reading or writing to memory that is not allocated to it, preventing malicious or malfunctioning code in one program from interfering with the operation of another.

Even though the memory allocated to specific processes is normally isolated, processes sometimes need to be able to share information. Shared memory is one of the fastest techniques for inter-process communication.

Source of the article: Wikipedia
