Often an application needs additional memory for the temporary storage of data. For example, the C programming language allows the programmer to use the malloc() (memory allocate) function to grab a chunk of memory suitable for the application's needs. Failure to release that memory with the free() function after it is used can result in problems. It is called dynamic memory allocation because the memory is allocated at run time, as needed. Unlike variables created within functions, the memory is not allocated on the processor stack; instead, when using malloc() or the 'new' keyword, memory is allocated in the application's virtual address space.
Static memory allocation can be defined as memory laid out in advance by the programmer; its size is fixed and cannot be changed by the user during execution of the program. Dynamic memory allocation, on the other hand, allows the user of the program to instruct the program how much memory is needed, at run time.
Dynamic memory allocation is where your program requests the allocation and use of memory from the operating system via the run-time library. In old C parlance, we would use the malloc() and free() family of functions. In newer C++ parlance, we use the new and delete operators.
There are two types of dynamic memory allocation...
The first is stack or automatic allocation. The malloc, free, new, and delete methods are not used by the programmer in this case. You simply declare the variables in the block of code and the compiler generates code to allocate that memory from the stack and to release it when the block of code is exited. This is the most common form used in functions, for both formal parameters and local variables, as well as for the stack frame itself.
The second is heap or explicit allocation. This is where you write code to invoke the allocator, instantiate the object, initialize it, use it, and then delete it. Since the object exists on the heap instead of the stack, its lifetime persists beyond the block of code in which it was created, unless that block of code also deleted it. This leads to a common error called memory leakage, wherein an object is allocated but never deleted and, in a long running program, can cause failure due to memory exhaustion. The solution to that problem is simply to keep track of your allocated objects and delete them when you no longer need them.
Another problem, less well understood, is memory fragmentation. Since you are allocating memory out of the heap, and returning it to the heap, at different points in time, the heap can grow fragmented, with many chunks being allocated or unallocated at any one point in time. This can lead to a problem where a request for allocation fails, even though there is enough free memory to meet that request, but there is not enough free memory in one chunk to meet that request.
Do not misunderstand this and think that the virtual memory hardware in the computer will save you. Yes, the physical memory is always fragmented, and the hardware page table fixes that. I'm talking about virtual address space fragmentation, a problem some programmers do not understand because they think they have unlimited address space. They don't.
The solution to that problem is often difficult. In a long-running program, such as a web server, it is critical and must be considered. In a short-running program, you weigh your risks and sometimes you can ignore the issue - sometimes you can even skip deleting objects when you are done with them, because they will be reclaimed automatically when your program exits - but that is poor programming practice, and I would give anyone who did that a poor grade.
The reason memory fragmentation is difficult to solve is that the solution requires, when a request arrives that cannot be met, that the existing objects be moved around, essentially defragmenting the free-space pool - but you cannot move an object while it is mutating, or while some thread holds its address in a private variable.
Many approaches have been taken. Smart pointers are one: the heap keeps track of all pointers to movable memory and updates them when needed. Another is the managed heap, as in Java and .NET - in this case, no one holds the raw address of an object, only a handle - but you still need to deal with mutating objects in a multi-threaded environment.
Dynamic memory allocation has its positive sides as well as drawbacks. It works well when you know how to manage memory. It allows you to reserve space in memory that can be reused by other parts of the program later on; static variables will not allow you to do that. But you have to be careful when working with memory allocation, because if you try to access memory your program does not own, the program will typically crash with an access violation (a segmentation fault, or in severe cases on Windows a "blue screen").
The advantage of dynamic over static memory allocation is twofold: first, it allows supporting applications and algorithms without predicting the exact amount of memory required for a particular problem. Such an algorithm can allocate the correct amount of memory at runtime, or "top up" the previously allocated memory, through dynamic memory allocation services. The second advantage is that the physical memory may be re-used over time: one algorithm may allocate a certain amount of memory at one time, return this memory to the heap later, and allocate another chunk of memory after this. The second allocation might re-use the same physical memory used with the first allocation.
The disadvantage of dynamic over static memory allocation is also twofold: first, dynamic memory incurs an administrative overhead and can fragment, so it might be less efficient than static allocation. Second, using dynamic memory means allocating the right amount of memory at the right time, using it correctly, and returning it to the pool (typically called the heap) as soon as possible. Failure to do so can cause a memory leak, which can be the cause of severe problems affecting the entire computer.
Dynamic memory allocation refers to the allocation of memory to an element at run time. For example, in linked lists the number of elements held by the list is not known at compile time. Whenever the user requests to insert or delete an element from the list, memory is allocated or deallocated accordingly. For memory allocation in C, the malloc, calloc and realloc functions are used, while free is used for deallocation.
We tend to avoid using the term "dynamic memory" as it doesn't actually tell us where that memory is being allocated, only that it is not being allocated statically. Non-static memory could mean the free store (heap memory), but it could also mean the allocation is on the stack, so it is important to be clear about what we are actually referring to when we talk about "dynamic memory".
Constants, global variables and static variables are always allocated in static memory (the program's data segment). Static memory is not dynamic memory because it is allocated at compile time and exists for as long as the program is running.
The call stack (or simply the stack) provides automatic storage for a function's formal arguments and local variables. When a function returns to its caller, that memory immediately falls from scope. That means we cannot return a reference to an object allocated on the stack because it would no longer be valid, but we can return its value. If we need to return a reference, then we must allocate the object we are referring to on the free store because free store memory is not scoped to the function and can therefore cross scopes.
Memory on the free store can be allocated in one of two ways. We can either request memory from the system as and when it is required or we can allocate "pools" of memory in advance of when we need it. The latter requires that we use a memory manager, but if we need to allocate and release memory frequently, then this can help improve performance because requesting memory from the system is a time-consuming operation.
In C we use malloc() or calloc() to request heap memory from the system and use free() to release heap memory back to the system. To achieve this we need to use a pointer variable to keep track of the start address of each allocation. The address can be passed to and returned from a function if necessary, so long as we keep track of its value, which means we must keep at least one pointer in scope at all times until the memory is released. If we fail to keep track of heap memory, we cannot release it to the system, thus creating a resource leak which cannot be recovered until the program terminates.
In C++ we use the new operator to allocate heap memory and the delete operator to release it. However, to avoid creating resource leaks, we make use of resource handles to manage the memory for us. That is, instead of using "naked" new operations in our code, we encapsulate the resource in a class whose constructors allocate the memory and whose destructor releases it. In this way, memory is allocated when an object comes into scope and is automatically released when it falls from scope. This technique is central to the "resource acquisition is initialisation" (RAII) paradigm and makes it possible to implement "smart" pointers such as std::unique_ptr (where the pointer "owns" the resource) and std::shared_ptr (where the pointer uses reference counting to provide automatic garbage collection).
In Java, we do not have to worry about heap or stack allocations since Java does not provide low-level facilities such as pointers. In Java we write code against a virtual machine which provides its own memory manager and garbage collection facility. All non-primitive types are implemented as objects, and those objects behave much like a resource handle does in C++; they take care of their own memory resources. In reality it is the virtual machine that takes care of the memory, so we don't have to worry so much about terms like stack and heap memory; we only need to worry about an object's scope. Hence Java programmers tend to refer to "dynamic memory" allocations without any regard to where that memory is physically allocated.
There are two types of memory allocations. 1. Static memory allocation 2. Dynamic memory allocation
Constructors are necessary to initialize classes, and they help avoid many problems with unauthorized memory access. Dynamic allocation makes it possible to allocate memory during the execution of a program. If you do not use dynamic allocation, all required memory is allocated during the initialization phase (constructors are usually responsible for that), and you cannot use more memory later. Dynamic allocation was designed to overcome that limitation.
Linked lists use dynamic memory allocation (also called "heap memory allocation", as the linked list is stored in heap memory).
Not freeing it when you no longer need the memory.
The main advantage of dynamic memory allocation is flexibility: the sizes of structures (or upper bounds on the sizes) do not need to be known in advance, so any size input that does not exceed available memory is easily handled. There are costs, however. Repeated calls to allocate and de-allocate memory place considerable strain on the operating system and can result in "thrashing" and decreased performance. In addition, one has to be very careful to "clean up" and de-allocate any memory that is allocated dynamically, to avoid memory leaks. The general rule of thumb is, if you can allocate memory statically, do it, because the result will probably be faster code that is easier to debug. But if you need to handle wide-ranging input sizes, then dynamic memory allocation is the way to do it.
Static memory allocation: allocating the total memory a data structure might need all at once, without regard for the actual amount needed at execution time. Dynamic memory allocation: the opposite strategy, allocating memory as needed during execution.
Memory allocation is not necessary to display a matrix.
Dynamic memory allocation
Static memory allocation occurs at compile time, whereas dynamic memory allocation occurs at run time.
malloc() :- to allocate memory. calloc() :- to allocate memory and initialize it to zero. free() :- to release the memory.
Static storage allocation is when memory is allocated at compile time and remains constant throughout the program execution. Dynamic storage allocation occurs at runtime and allows memory to be allocated and deallocated as needed during program execution.
Memory allocation schemes used by operating systems: single-user contiguous, fixed partitions, dynamic partitions, relocatable dynamic partitions, paged memory allocation, demand paging, working set, segmented memory allocation, and segmented/demand paged memory allocation. (Taken from "Understanding Operating Systems", 6th edition, pg 99.)