Von Neumann Architecture
Going back in history, the Von Neumann architecture was first described in John von Neumann's report of June 30, 1945 (the "First Draft of a Report on the EDVAC"), and the same stored-program principle has been used in electronic computers ever since. In this architecture a single data path, or bus, serves both instructions and data, so the CPU performs one memory operation at a time: it either reads or writes data, or it fetches an instruction from memory. An instruction fetch and a data transfer therefore cannot occur simultaneously over the common bus.
The shared bus between program memory and data memory leads to the von Neumann bottleneck: the throughput (data transfer rate) between the central processing unit (CPU) and memory is limited relative to the amount of memory. Because the single bus can access only one of the two classes of memory at a time, throughput is lower than the rate at which the CPU can work. This seriously limits effective processing speed when the CPU must perform minimal processing on large amounts of data; the CPU is continually forced to wait for needed data to be transferred to or from memory. Since CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, and its severity grows with every new generation of CPU.
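The shared-bus constraint can be made concrete with a toy stored-program machine. The sketch below is purely illustrative, with an invented instruction encoding: one memory array holds both the program and its data, and every instruction fetch, data read, or data write is counted as a separate transaction on the single bus.

```python
# Toy von Neumann machine: one memory holds both instructions and data,
# and every instruction fetch or data access is a separate bus transaction.
# The instruction set and encoding here are invented for illustration.

class VonNeumannMachine:
    def __init__(self, memory):
        self.mem = memory          # shared program + data memory
        self.pc = 0                # program counter
        self.acc = 0               # accumulator
        self.bus_accesses = 0      # every memory touch goes over one bus

    def read(self, addr):
        self.bus_accesses += 1
        return self.mem[addr]

    def write(self, addr, value):
        self.bus_accesses += 1
        self.mem[addr] = value

    def run(self):
        while True:
            op, operand = self.read(self.pc)   # instruction fetch: one bus access
            self.pc += 1
            if op == "LOAD":
                self.acc = self.read(operand)  # data read: another bus access
            elif op == "ADD":
                self.acc += self.read(operand)
            elif op == "STORE":
                self.write(operand, self.acc)
            elif op == "HALT":
                return

# Program occupies addresses 0-3; its data lives at 4-6 in the *same* memory.
memory = [
    ("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", None),  # program
    10, 32, 0,                                              # data
]
m = VonNeumannMachine(memory)
m.run()
print(m.mem[6], m.bus_accesses)  # 42 7  (4 instruction fetches + 3 data accesses)
```

Because fetches and data accesses are serialized on one bus, this three-operand computation costs seven bus transactions; no fetch can overlap a data transfer.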
In the Harvard architecture, separate storage and separate signal buses are provided for instructions and data. In a pure Harvard machine the instruction memory and the data memory occupy distinct address spaces, and the CPU cannot read or write instruction storage as data. This arrangement provides simultaneous access to instructions and data over the microcontroller's internal buses. A Harvard architecture computer can thus be faster for a given circuit complexity, because instruction fetches and data accesses do not contend for a single memory pathway.
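The same idea can be sketched as a toy Harvard machine, again with an invented instruction encoding: the program and data sit in separate memories with their own buses, so each instruction's fetch and its data access can be modeled as happening in a single cycle.

```python
# Toy Harvard machine: instructions and data live in separate memories with
# separate buses, so an instruction fetch and a data access can overlap.
# The instruction set and encoding here are invented for illustration.

class HarvardMachine:
    def __init__(self, program, data):
        self.prog = program   # instruction memory, on its own bus
        self.data = data      # data memory, on its own bus
        self.pc = 0           # program counter
        self.acc = 0          # accumulator
        self.cycles = 0       # one cycle = a fetch plus (optionally) a data access

    def run(self):
        while True:
            op, operand = self.prog[self.pc]   # fetch on the instruction bus...
            self.pc += 1
            self.cycles += 1                   # ...overlapped with any data access
            if op == "LOAD":
                self.acc = self.data[operand]  # data bus, same cycle as the fetch
            elif op == "ADD":
                self.acc += self.data[operand]
            elif op == "STORE":
                self.data[operand] = self.acc
            elif op == "HALT":
                return

program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
data = [10, 32, 0]
h = HarvardMachine(program, data)
h.run()
print(h.data[2], h.cycles)  # 42 4  (each fetch overlaps its data access)
```

With the two memory pathways, the same three-operand computation completes in four cycles, one per instruction, because no fetch ever waits behind a data transfer.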
- Digital signal processors (DSPs) generally execute small, highly optimized audio or video processing algorithms. They avoid caches because their behavior must be extremely reproducible. The difficulty of coping with multiple address spaces is a secondary concern compared with speed of execution. Consequently, some DSPs feature multiple data memories in distinct address spaces to facilitate SIMD and VLIW processing.
- Microcontrollers are characterized by having small amounts of program memory (flash) and data memory (SRAM), and they take advantage of the Harvard architecture to speed processing through concurrent instruction and data access. The separate storage means the program and data memories may have different bit widths, for example 16-bit-wide instructions and 8-bit-wide data. It also means that instruction prefetch can be performed in parallel with other activities.
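The point about differing bit widths can be sketched in a few lines, assuming a hypothetical AVR-style layout (16-bit instruction words in flash, 8-bit data cells in SRAM); the instruction word values are invented.

```python
# Sketch: Harvard separation lets program and data memories differ in width.
# Hypothetical AVR-style layout: 16-bit instruction words in flash,
# 8-bit data cells in SRAM. The word values below are invented.

flash = [0x940C, 0x2F01, 0xE005]   # program memory: 16-bit instruction words
sram = bytearray(8)                # data memory: 8-bit cells, initially zeroed

assert all(0 <= w <= 0xFFFF for w in flash)  # every instruction fits in 16 bits
sram[0] = 0xFF                               # a data cell holds at most one byte
try:
    sram[1] = 0x100                          # a 9-bit value does not fit...
except ValueError:
    pass                                     # ...so sram[1] keeps its old value
```

A shared von Neumann memory would force one common word size on both; keeping the memories separate lets each be sized to what it stores.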