Memory is an important part of embedded systems. The cost and performance of an embedded system depend heavily on the kind of memory devices it utilizes. In this section we will discuss "Memory Classification", "Memory Technologies" and "Memory Management".
(1) Memory Classification
Memory devices can be classified based on the following characteristics:
(a) Type of Access
(b) Persistence of Storage
(c) Storage Density & Cost
(d) Storage Media
(e) Power Consumption
Type of Access
Memory devices can provide Random Access, Serial Access or Block Access. In a Random Access memory, each word in memory can be directly accessed by specifying its address. RAM, SDRAM and NOR Flash are examples of Random Access memories. In a Serial Access memory, all the previous words (previous to the word being accessed) need to be accessed before the desired word can be reached. I2C PROM and SPI PROM are examples of Serial Access memories. In Block Access memories, the entire memory is sub-divided into small blocks (generally of the order of a KByte). Each block can be randomly accessed, and each word within a given block can be serially accessed. Hard Disks and NAND Flash employ this mechanism. Word access time for a Random Access memory is independent of the word's location, which is desirable for high-speed applications making frequent accesses to the memory.
Persistence of Storage
Memory devices can provide volatile or non-volatile storage. In a non-volatile storage, the memory contents are preserved even after power shutdown, whereas a volatile memory loses its contents after power shutdown. Non-volatile storage is needed for storing application code and re-usable data, whereas volatile memory can be used for temporary storage. RAM and SDRAM are examples of volatile memories. Hard Disks, Flash (NOR & NAND) memories, SD-MMC cards and ROM are examples of non-volatile storage.
Storage Media
Memory devices may employ electronic (in terms of transistors or electron states) storage, magnetic storage or optical storage. RAM and SDRAM are examples of electronic storage. Hard Disks are an example of magnetic storage. CDs (Compact Discs) are an example of optical storage. Old computers also employed magnetic storage (magnetic storage is still common in some consumer electronics products).
Storage Density & Cost
Storage Density (the number of bits which can be stored per unit area) is generally a good measure of cost. Dense memories (like SDRAM) are much cheaper than their less dense counterparts (like SRAM).
Power Consumption
Low power consumption is highly desirable in battery-powered embedded systems. Such systems generally employ memory devices which can operate at low (and ultra-low) voltage levels. Mobile SDRAMs are examples of low-power memories.
(2) Memory Technologies
RAM
RAM stands for Random Access Memory. RAMs are the simplest and most common form of data storage. RAMs are volatile. The figure below shows typical Data, Address and Control signals on a RAM. The number of words which can be stored in a RAM grows as a power of two with the number of address lines available. This severely restricts the storage capacity of RAMs (a byte-addressable 32 GB RAM would require 35 address lines), because designing circuit boards with more signal lines directly adds to the complexity and cost.
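The power-of-two relationship between address lines and capacity can be sketched as a small helper (an illustrative function, not from any specific datasheet):

```c
#include <stdint.h>

/* Smallest number of address lines n such that 2^n >= words,
   i.e. enough lines to address every word in the memory. */
static unsigned address_lines(uint64_t words)
{
    unsigned n = 0;
    while (((uint64_t)1 << n) < words)
        n++;
    return n;
}
```

For example, a byte-addressable 32 GB (2^35 bytes) memory needs 35 address lines, while a 64 KB SRAM needs only 16.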
DPRAM (Dual Port RAM)
DPRAMs are static RAMs with two I/O ports. These two ports access the same memory locations, hence DPRAMs are generally used to implement shared memories in dual-processor systems. The operations performed on a single port are identical to those on any RAM. There are some common problems associated with the usage of DPRAMs:
(a) Possible data corruption when both ports try to access the same memory location - most DPRAM devices provide interlocked memory accesses to avoid this problem.
(b) Loss of data coherency when a cache scheme is being used by a processor accessing the DPRAM - this happens because any data modifications (in the DPRAM) by one processor are unknown to the cache controller of the other processor. In order to avoid such issues, shared memories are not mapped to the cacheable space. If the processor's cache configuration is not flexible enough to define the shared memory space as non-cacheable, the cache needs to be invalidated before performing any reads from this memory space.
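A minimal sketch of the second workaround, assuming the shared region could not be mapped non-cacheable. Here `cache_invalidate_range()` stands in for the platform's real cache-maintenance call (names differ per processor and BSP) and is an empty stub so the sketch compiles; the DPRAM window is modelled as a plain array:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the shared DPRAM window (a real system would use a
   fixed memory-mapped address instead). */
static volatile uint32_t dpram[256];

/* Hypothetical platform hook: discard any cached copies of the range
   so the next load goes to the actual memory. Stubbed out here. */
static void cache_invalidate_range(volatile void *addr, size_t len)
{
    (void)addr;
    (void)len;
}

/* Read one word from the shared region, dropping stale cached data first. */
static uint32_t shared_read(unsigned word)
{
    cache_invalidate_range(&dpram[word], sizeof dpram[word]);
    return dpram[word];
}
```

The `volatile` qualifier additionally prevents the compiler from caching the value in a register across reads.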
DRAM (Dynamic RAM)
Dynamic RAMs use a different technique for data storage. A static RAM cell has four (or six) transistors, whereas a dynamic RAM cell has only one transistor. DRAMs use capacitive storage. Since the capacitor can lose its charge, these memories need to be refreshed periodically. This makes DRAMs more complex (because extra control is needed) and more power consuming. However, DRAMs have a very high storage density (as compared to static RAMs) and are much cheaper. DRAMs are generally accessed in terms of rows, columns and pages, which significantly reduces the number of address lines (another advantage over SRAM). Generally you need an SDRAM controller (which manages the different SDRAM commands and address translation) to access an SDRAM. Most modern processors come with an on-chip SDRAM controller.
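The row/column addressing mentioned above can be sketched as follows. The geometry here (13 row bits, 10 column bits) is illustrative only, not taken from any specific part; the controller issues the row address first (RAS), then the column address (CAS), on the same multiplexed pins, so only max(13, 10) = 13 external address lines are needed instead of 23:

```c
#include <stdint.h>

/* Hypothetical DRAM geometry (illustrative values). */
#define COL_BITS 10
#define ROW_BITS 13

/* Row part of a linear word address: the bits above the column bits. */
static uint32_t dram_row(uint32_t addr)
{
    return (addr >> COL_BITS) & ((1u << ROW_BITS) - 1);
}

/* Column part: the low COL_BITS bits of the address. */
static uint32_t dram_col(uint32_t addr)
{
    return addr & ((1u << COL_BITS) - 1);
}
```

This address splitting is exactly the kind of translation an SDRAM controller performs on behalf of the processor.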
OTP-EPROM, UV-EPROM and EEPROM
EPROMs (Erasable Programmable Read Only Memories) are non-volatile memories. Contents of a ROM can be randomly accessed, but generally the word RAM is used to refer only to volatile random access memories. The voltage required for writing into an EPROM is much higher than the normal operating voltage, hence you cannot write into an EPROM in-circuit (which is what makes it a ROM). You need special programming stations (which have a write mechanism) to write into EPROMs.
OTP-EPROMs are One Time Programmable: their contents cannot be changed once written. UV-EPROMs are UV-erasable EPROMs. Exposure of the memory cells to UV light erases the existing contents, after which the device can be re-programmed. EEPROMs are Electrically Erasable EPROMs. These can be erased electrically (generally on the same programming station where you write into them). The number of write cycles (the number of times you can erase and re-write) for UV-EPROMs and EEPROMs is fairly limited. Erasable PROMs use either FLOTOX (Floating gate Tunnel Oxide) or FAMOS (Floating gate Avalanche MOS) technology.
NOR Flash
Flash (or NOR Flash, to be more accurate) is quite similar to EEPROM in usage and can be considered in the class of EEPROMs (since it is electrically erasable). However, there are a few differences. Firstly, Flash devices are in-circuit programmable. Secondly, they are much cheaper than conventional EEPROMs. These days, (NOR) Flash is widely used for storing boot code.
NAND Flash
NAND Flash memories are denser and cheaper than NOR Flash. However, these memories are block accessible and cannot be used for code execution. These devices are mostly used for data storage (since NAND is cheaper than NOR Flash). However, some systems use them for storing boot code (with external hardware, or with built-in NAND boot logic in the processor).
SD-MMC
SD-MMC cards provide a cheaper means of mass storage. These memory cards can provide storage capacity of the order of GBytes. These cards are very compact and can be used with portable systems. Most modern hand-held devices requiring mass storage (e.g. still and video cameras) use memory cards for storage.
Hard Disks
Hard Disks are magnetic memory devices. These devices are bulky and are generally used for mass storage, hence they are not found in smaller, portable systems. However, they are used in embedded systems which require bulk storage without tight size constraints.
(3) Memory Management
The size and the speed (access time) of computer memories are inversely related: increasing the size means a reduction in speed. In fact, most memories are made up of smaller memory blocks (generally 4 KB) in order to improve speed. The cost of a memory is also highly dependent on its speed. In order to achieve good performance, it is desirable that code and data reside in a high-speed memory. However, using a high-speed memory for all the code and data in a reasonably large system may be practically impossible. Even in a smaller system, using high-speed memory as the only storage device can raise the system cost enormously.
Most systems employ a hierarchical memory system. They use a small and fast (and expensive) memory device to store frequently used code and data, whereas less frequently used data is stored in a big, low-speed (cheaper) memory device. In a complex system there can be multiple levels of memory hierarchy (with varying speed and cost).
Cache
A cache controller is a hardware unit (generally built into the processor) which dynamically moves the code and data currently in use from a higher-level (slower) memory to the lowest-level (cache) memory. The incoming data or code replaces old code or data (which is currently not in use) in the cache memory. This data (or code) movement is hidden from the user.
Cache memories are based on the principle of locality in space and time. There are different types of cache organization and different replacement mechanisms.
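As one concrete organization, a direct-mapped lookup can be sketched as below. The sizes (32-byte lines, 128 sets) are illustrative assumptions, and a real controller does this entirely in hardware:

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_BITS 5                     /* 32-byte cache lines */
#define SET_BITS  7                     /* 128 sets            */
#define NUM_SETS  (1u << SET_BITS)

struct line { bool valid; uint32_t tag; };
static struct line cache[NUM_SETS];

/* Middle bits of the address select the set; top bits form the tag. */
static uint32_t set_of(uint32_t addr) { return (addr >> LINE_BITS) & (NUM_SETS - 1); }
static uint32_t tag_of(uint32_t addr) { return addr >> (LINE_BITS + SET_BITS); }

/* Returns true on a hit; on a miss, the line is (re)filled, evicting
   whatever tag occupied that set before. */
static bool cache_lookup(uint32_t addr)
{
    struct line *l = &cache[set_of(addr)];
    if (l->valid && l->tag == tag_of(addr))
        return true;
    l->valid = true;
    l->tag = tag_of(addr);
    return false;
}
```

Repeated accesses to the same line hit (locality in time); two addresses whose set bits collide evict each other, which is the classic weakness of the direct-mapped scheme.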
Software Overlays
Low-cost microprocessors generally do not have a built-in cache controller. On these devices it may still be desirable to keep the currently used code (or data) in internal memory and replace it with a new code section when it is no longer in use. This can be done using "Software Overlays". Either code or data overlays can be used. In this section we will only discuss code overlays (a similar analogy applies to data overlays).
(a) Each code section which is mapped to an overlay has a run space and a live space. The live space is the space in the external (or higher-level) memory where this code section resides when it is not running. The run space is the space in the internal (or lower-level) memory where this code resides during execution.
(b) The Overlay Manager is a piece of software which dynamically moves code sections from live space to run space (whenever a function from a given overlay section is called).
(c) The Linker and Loader tools generate overlay symbols for the code sections which are mapped to overlays. The overlay symbols are supplemented by information about the run space and live space of the given overlay. This information is used by the overlay manager to move the overlays dynamically.
(d) You can have multiple overlays in your system. The different code sections mapped to a given overlay have different live spaces but the same run space.
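The mechanism described in (a)-(d) can be sketched as a toy overlay manager. The names (`overlay_table`, `run_space`, `overlay_load`) are hypothetical; a real manager would be driven by the overlay symbols emitted by the linker, and the copied bytes would be executable code rather than plain data:

```c
#include <stddef.h>
#include <string.h>

#define RUN_SPACE_SIZE 256

/* One shared run space in fast internal memory. */
static char run_space[RUN_SPACE_SIZE];
static int  loaded = -1;                /* id of the overlay currently loaded */

struct overlay {
    const char *live;                   /* live space in slow external memory */
    size_t      size;
};

/* Two mutually exclusive code sections sharing the run space;
   filled in at startup from linker-generated information. */
static struct overlay overlay_table[2];

/* Ensure overlay `id` is resident, copying it in only on a switch,
   and return the address the caller should execute/use. */
static void *overlay_load(int id)
{
    if (loaded != id) {
        memcpy(run_space, overlay_table[id].live, overlay_table[id].size);
        loaded = id;
    }
    return run_space;
}
```

Note that the copy is skipped when the requested overlay is already resident, which is why infrequent switching between sections is essential for overlays to pay off.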
To use code overlays:
(a) Firstly, you need to make sure that your code generation tools (linker and loader) provide the minimum support (in terms of overlay symbols) needed for overlays.
(b) Secondly, you need to identify mutually exclusive code sections in your application. Mutually exclusive means that only one of these code sections is used at any given point in time. Also make sure that the switching time between these code sections (i.e. the average time after which the processor will require code from a different section) is quite high. Otherwise, software overlays will degrade performance (rather than improving it).
(c) Make sure that you have enough run space to accommodate the largest overlay section.
(d) While implementing code overlays, you can still choose to keep some code sections (which are not likely to improve performance if used as overlays) out of the overlays; these sections will have the same live space and run space.
Data overlays are analogous to code overlays, but they are rarely used.
Virtual Memory
The Virtual Memory mechanism allows users to store their data on a hard disk while still using it as if it were available in RAM. The application accesses the data in a virtual address space (which is mapped to RAM), whereas the actual data physically resides on the hard disk (and is moved to RAM for access).
In virtual mode, memory is divided into pages, usually 4096 bytes long. These pages may reside in any available RAM location that can be addressed in virtual mode. The high-order bits of the memory address register are an index into page-mapping tables at specific starting locations in memory, and the table entries contain the starting real addresses of the corresponding pages. The low-order bits of the address register are an offset of 0 up to 4095 (0 to the page size - 1) into the page ultimately referenced by resolving all the table references of page locations.
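The address resolution described above can be sketched with a toy single-level table (real systems use multi-level tables and a hardware MMU/TLB; the table size and names here are illustrative assumptions):

```c
#include <stdint.h>

#define PAGE_BITS 12
#define PAGE_SIZE (1u << PAGE_BITS)     /* 4096-byte pages */

/* Toy page table: virtual page index -> physical base address of
   the RAM frame holding that page. */
static uint32_t page_table[16];

/* Split a virtual address into page index (high bits) and offset
   (low 12 bits), then combine the mapped frame base with the offset. */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    return page_table[page] + offset;
}
```

A page whose table entry points nowhere would instead raise a page fault, prompting the operating system to fetch it from disk, which is exactly the movement between hard disk and RAM described above.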
The distinct advantages of Virtual Memory Mechanism are:
(a) User can access (in virtual space) more RAM space than what actually exists in the system.
(b) In a multi-tasking application, each task can have its own independent virtual address space (called a discrete address space).
(c) Applications can treat data as if it is stored in contiguous memory (in the virtual address space), whereas it may be in non-contiguous locations (in actual memory).
Cache memory and Virtual Memory are quite similar in concept, and they provide similar benefits. However, these schemes differ significantly in terms of implementation:
* Cache control is fully implemented in hardware. Virtual memory management is done by software (the Operating System) with some minimal support from hardware.
* With cache memory in use, the user still accesses the actual physical memory (and the cache is hidden from the user). It is the reverse with virtual memory: the user accesses the virtual memory, and the actual physical memory is hidden from the user.