Tuesday, June 23, 2009

RESEARCH_02

Bootstrap Program.
In computing, bootstrapping (from the old expression "to pull oneself up by one's bootstraps") is a technique by which a simple computer program activates a more complicated system of programs. In the start-up process of a computer system, a small program such as the BIOS initializes the hardware and tests that peripherals and external memory devices are connected, then loads a program from one of them and passes control to it, thus allowing larger programs, such as an operating system, to be loaded.
A different use of the term bootstrapping is using a compiler to compile itself: a small part of the compiler for a new programming language is first written in an existing language, and that part is then used to compile the rest of the compiler, which is written in the new language itself. This resolves the "chicken and egg" causality dilemma.
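To make the first-stage loading idea concrete, here is a minimal sketch assuming nothing more than standard C: the "disk" is just an in-memory array and run_loaded_program() stands in for jumping to the loaded boot sector, since real firmware does this in assembly against real hardware. All names here (read_sector, disk, memory) are invented for illustration.

```c
/* Illustrative sketch only (not real firmware): the logical steps of a
 * first-stage boot loader, simulated in ordinary C. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define SECTOR_SIZE 512

static uint8_t disk[SECTOR_SIZE];            /* pretend boot device      */
static uint8_t memory[SECTOR_SIZE];          /* pretend load area in RAM */

static int read_sector(uint32_t lba, void *dest)
{
    /* A real loader would program the disk controller here. */
    memcpy(dest, disk + lba * SECTOR_SIZE, SECTOR_SIZE);
    return 0;
}

static void run_loaded_program(void)
{
    /* Stands in for jumping to the loaded boot sector / OS loader. */
    printf("control transferred to second-stage loader: %s\n", memory);
}

int main(void)
{
    strcpy((char *)disk, "stage-2 image"); /* "install" a boot image       */

    /* 1. Initialize and test hardware (omitted in this simulation).       */
    /* 2. Load the first sector of the boot device into memory.            */
    read_sector(0, memory);
    /* 3. Pass control to the loaded program, which loads the OS proper.   */
    run_loaded_program();
    return 0;
}
```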

Difference between an interrupt and a trap, and their uses.
An interrupt is generally initiated by an I/O device, and causes the CPU to stop what it's doing, save its context, jump to the appropriate interrupt service routine, complete it, restore the context, and continue execution. For example, a serial device may assert the interrupt line and then place an interrupt vector number on the data bus. The CPU uses this to get the serial device interrupt service routine, which it then executes as above.
A trap is usually initiated by the CPU hardware itself. Whenever the trap condition occurs (on arithmetic overflow, for example), the CPU stops what it's doing, saves the context, jumps to the appropriate trap routine, completes it, restores the context, and continues execution. For example, if overflow traps are enabled, adding two very large integers would cause the overflow bit to be set AND the overflow trap service routine to be initiated.
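As a concrete user-space analogue of the trap sequence above (assuming a POSIX system such as Linux): integer division by zero raises a hardware exception that the kernel delivers to the process as SIGFPE, and the signal handler plays the role of the trap service routine. This is only a sketch of the idea, not kernel-level code.

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void trap_handler(int sig)
{
    (void)sig;
    /* "Trap service routine": report and terminate. Returning would
     * re-execute the faulting divide and trap again immediately.     */
    const char msg[] = "arithmetic trap caught\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void)
{
    signal(SIGFPE, trap_handler);        /* register the trap routine */
    volatile int zero = 0;
    printf("%d\n", 1 / zero);            /* triggers the divide trap  */
    return 0;
}
```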

Monitor Mode.
Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor.

User Mode.
User-mode Linux (UML) enables multiple virtual Linux systems (known as guests) to run as an application within a normal Linux system (known as the host). As each guest is just a normal application running as a process in user space, this approach provides the user with a way of running multiple virtual Linux machines on a single piece of hardware, offering excellent security and safety without affecting the host environment's configuration or stability.

Device Status Table.
There is a Device Status table that shows important device status at a glance. The number of events grouped by severity can be found on the left side of this table. You can click on the 'Event Rainbow' to view the list of events for the device.


Direct Memory Access (DMA).
Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to proceed concurrently.
Without DMA, using programmed input/output (PIO) mode for communication with peripheral devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU initiates the transfer, performs other operations while the transfer is in progress, and receives an interrupt from the DMA controller once the operation is complete. This is especially useful in real-time computing applications, where avoiding stalls on concurrent operations is critical. Another, related application area is various forms of stream processing, where it is essential to have data processing and transfer in parallel in order to achieve sufficient throughput.
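The contrast can be sketched in C for a hypothetical device; the memory-mapped registers below (DEV_STATUS, DEV_DATA, DMA_ADDR, DMA_COUNT, DMA_CTRL) are invented for illustration and do not belong to any real controller.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped device registers. */
volatile uint32_t DEV_STATUS, DEV_DATA;
volatile uint32_t DMA_ADDR, DMA_COUNT, DMA_CTRL;

#define DEV_READY 0x1
#define DMA_START 0x1

/* Programmed I/O: the CPU copies every word itself and is busy
 * for the whole transfer. */
void pio_read(uint32_t *buf, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        while (!(DEV_STATUS & DEV_READY))   /* busy-wait on the device */
            ;
        buf[i] = DEV_DATA;                  /* CPU moves each word     */
    }
}

/* DMA: the CPU only programs the controller and returns; the transfer
 * proceeds in parallel and completion is signalled by an interrupt. */
void dma_read_start(uint32_t *buf, size_t nwords)
{
    DMA_ADDR  = (uint32_t)(uintptr_t)buf;   /* destination in memory   */
    DMA_COUNT = (uint32_t)nwords;           /* words to transfer       */
    DMA_CTRL  = DMA_START;                  /* kick off the transfer   */
    /* CPU is now free to do other work. */
}

/* Invoked by the interrupt the DMA controller raises when done. */
void dma_complete_isr(void)
{
    /* wake up whoever was waiting for the buffer */
}
```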

Difference between RAM and DRAM.
Dynamic RAM is a type of RAM that only holds its data if it is continuously accessed by special logic called a refresh circuit. Many hundreds of times each second, this circuitry reads the contents of each memory cell, whether the memory cell is being used at that time by the computer or not. Due to the way in which the cells are constructed, the reading action itself refreshes the contents of the memory. If this is not done regularly, then the DRAM will lose its contents, even if it continues to have power supplied to it. This refreshing action is why the memory is called dynamic.
All PCs use DRAM for their main system memory, instead of SRAM, even though DRAMs are slower than SRAMs and require the overhead of the refresh circuitry. It may seem odd to build the computer's memory out of something that can only hold a value for a fraction of a second, especially since, at the system level, DRAM is both slower than SRAM and more complicated because of the refresh requirement.
The reason that DRAMs are used is simple: they are much cheaper and take up much less space, typically 1/4 the silicon area of SRAMs or less. Building 64 MB of main memory from SRAMs would be very expensive. The overhead of the refresh circuit is tolerated in order to allow the use of large amounts of inexpensive, compact memory. The refresh circuitry itself is almost never a problem; after many years of using DRAM, the design of these circuits has been all but perfected.
DRAMs are smaller and less expensive than SRAMs because SRAM cells are made from four to six transistors (or more) per bit, while DRAM cells use only one transistor plus a capacitor. The capacitor, when energized, holds an electrical charge if the bit contains a "1" or no charge if it contains a "0". The transistor is used to read the contents of the capacitor. The problem with capacitors is that they hold a charge for only a short period of time before it fades away, and because these capacitors are tiny, their charges fade particularly quickly. This is why the refresh circuitry is needed: to read the contents of every cell and refresh them with a fresh charge before the contents fade away and are lost. Refreshing is done by reading every "row" in the memory chip one row at a time; the act of reading the contents of each capacitor re-establishes the charge.
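A quick back-of-envelope comparison using the figures above (roughly six transistors per SRAM bit versus one transistor plus one capacitor per DRAM bit) and the 64 MB example; actual densities vary by process, so treat this only as an illustration of the scale difference.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t bits = 64ULL * 1024 * 1024 * 8;   /* 64 MB in bits        */
    const uint64_t sram_transistors = bits * 6;      /* ~6 transistors/bit   */
    const uint64_t dram_transistors = bits * 1;      /* + one capacitor/bit  */

    printf("bits:             %llu\n", (unsigned long long)bits);
    printf("SRAM transistors: %llu\n", (unsigned long long)sram_transistors);
    printf("DRAM transistors: %llu\n", (unsigned long long)dram_transistors);
    return 0;
}
```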
DRAM is manufactured using a process similar to the one used for processors: a silicon substrate is etched with the patterns that make the transistors and capacitors (and support structures) that comprise each bit. DRAM costs much less than a processor because it is a series of simple, repeated structures, so there isn't the complexity of making a single chip with several million individually located transistors.
There are many specific DRAM technologies, available in a range of speeds. These have evolved over many years of using DRAM for system memory.



STORAGE STRUCTURE
Storage Structures Inc. hereby warrants the buildings constructed at Job against defective workmanship and weathertightness for a period of 5 years from the date of substantial completion ( ). Storage Structures warrants to the Building Owner the following: that the roof panels, flashing, and related items used to fasten the roof panels and flashing to the roof structure will not allow intrusion of water from the exterior of the roof system into the building envelope when exposed to ordinary weather conditions and ordinary wear and usage.


Main Memory
Main memory refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM. The computer can manipulate only data that is in main memory. Therefore, every program you execute and every file you access must be copied from a storage device into main memory. The amount of main memory on a computer is crucial because it determines how many programs can be executed at one time and how much data can be readily available to a program. Because computers often have too little main memory to hold all the data they need, computer engineers invented a technique called swapping, in which portions of data are copied into main memory as they are needed. Swapping occurs when there is no room in memory for needed data. When one portion of data is copied into memory, an equal-sized portion is copied (swapped) out to make room. Most PCs now come with a minimum of 32 megabytes of main memory. You can usually increase the amount of memory by inserting extra memory in the form of chips.
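As a small aside, on a Linux system the amount of installed main memory can be queried from a program. This sketch assumes sysconf() exposes _SC_PHYS_PAGES and _SC_PAGE_SIZE, which is the case on Linux but is not guaranteed on every POSIX platform.

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long pages     = sysconf(_SC_PHYS_PAGES);   /* number of physical pages */
    long page_size = sysconf(_SC_PAGE_SIZE);    /* size of one page, bytes  */

    if (pages < 0 || page_size < 0) {
        perror("sysconf");
        return 1;
    }
    printf("main memory: %ld MB\n", (pages / 1024) * page_size / 1024);
    return 0;
}
```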


Magnetic Disk.
Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2009, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred, and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference.


Moving Head Disk mechanism
The invention relates to a head mechanism control apparatus for a magnetic recording device which ensures that the transducer head has been unloaded in a head unloading operation. A head slider carrying a transducer head is mounted on a head arm and is unloaded from the disk by the swinging of the head arm. The retract position is at the limit of the swingable range of the head arm. When the head slider has been successfully unloaded to the retract position, a drive force applied to the head arm in the unload direction causes little movement of the head arm. After an unload is performed, a position-check drive force is applied to the head arm and the moving speed of the head slider is detected. Based on the moving speed, it is judged whether the head slider has been unloaded to the retract position. If it has not, re-unloading is performed to move the head slider to the retract position.


Magnetic Tapes
Magnetic tape is a medium for magnetic recording, generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and play back audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.
Magnetic tape revolutionized the broadcast and recording industries. In an age when all radio (and later television) was live, it allowed programming to be prerecorded. In a time when gramophone records were recorded in one take, it allowed recordings to be created in multiple stages and easily mixed and edited with a minimal loss in quality between generations. It is also one of the key enabling technologies in the development of modern computers. Magnetic tape allowed massive amounts of data to be stored in computers for long periods of time and rapidly accessed when needed.
Today, many other technologies exist that can perform the functions of magnetic tape. In many cases these technologies are replacing tape. Despite this, innovation in the technology continues and tape is still widely used.


STORAGE HIERARCHY
To clarify the "guarantees" provided at different settings of the persistence spectrum without binding the application to a specific environment or set of storage devices, MBFS implements the continuum, in part, with a logical storage hierarchy. The hierarchy is defined by N levels:

1. LM (Local Memory storage): very high-speed volatile storage located on the machine creating the file.

2. LCM (Loosely Coupled Memory storage): high-speed volatile storage consisting of the idle memory space available across the system.

3-N. DA (Distributed Archival storage): slower-speed stable storage space located across the system.

Logically, decreasing levels of the hierarchy are characterized by stronger persistence, larger storage capacity, and slower access times. The LM level is simply locally addressable memory (whether on or off the CPU). The LCM level combines the idle memory of machines throughout the system into a loosely coupled, and constantly changing, storage space. The DA level may actually consist of any number of sub-levels (denoted DA1, DA2, ..., DAn), each of increasing persistence (or capacity) and decreasing performance. LM data will be lost if the current machine crashes or loses power. LCM data has the potential to be lost if one or more machines crash or lose power. DA data is guaranteed to survive power outages and machine crashes. Replication and error correction are provided at the LCM and DA levels to improve the persistence offered by those levels.
Each level of the logical MBFS hierarchy is ultimately implemented by a physical storage device. LM is implemented using standard RAM on the local machine and LCM using the idle memory of workstations throughout the network. The DA sub-levels must be mapped to some organization of the available archival storage devices in the system. The system administrator is expected to define the mapping via a system configuration file. For example, DA-1 might be mapped to the distributed disk system while DA-2 is mapped to the distributed tape system.
Because applications are written using the logical hierarchy, they can be run in any environment, regardless of the mapping. The persistence guarantees provided by the three main levels of the hierarchy (LM, LCM, DA1) are well defined. In general, applications can use the other layers of the DA to achieve higher persistence guarantees, without knowing the exact details of the persistence guaranteed; only that it is better. For applications that want to change their storage behavior based on the characteristics of the current environment, the details of each DA's persistence guarantees, such as the expected mean-time-till-failure, can be obtained via a stat() call to the file system. Thus, MBFS makes the layering abstraction explicit while hiding the details of the devices used to implement it. Applications can control persistence with or without exact knowledge of the characteristics of the hardware used to implement it.
Once the desired persistence level has been selected, MBFS's loosely coupled memory system uses an addressing algorithm to distribute data to idle machines and employs a migration algorithm to move data off machines that change from idle to active. The details of the addressing and migration algorithms can be found in [15,14]; the same algorithms are also used by the archival storage levels. Finally, MBFS provides whole-file consistency via callbacks similar to Andrew [19] and a Unix security and protection model.
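Purely as a hypothetical sketch of how an application might request a level of the logical hierarchy when creating a file: the mbfs_create() call and the MBFS_LM/LCM/DA1 constants below are invented for illustration and are not the real MBFS interface, so the stub simply prints what it would do.

```c
#include <stdio.h>

/* Hypothetical logical levels of the MBFS hierarchy. */
enum mbfs_level { MBFS_LM, MBFS_LCM, MBFS_DA1, MBFS_DA2 };

/* Stand-in for the (hypothetical) file-system call: create a file whose
 * data is kept at the given level, i.e. with that level's persistence. */
static int mbfs_create(const char *path, enum mbfs_level level)
{
    printf("create %s at logical level %d\n", path, level);
    return 0;
}

int main(void)
{
    mbfs_create("/tmp/scratch.dat", MBFS_LM);    /* fast, volatile       */
    mbfs_create("/results/run1.dat", MBFS_DA1);  /* stable, crash-proof  */
    return 0;
}
```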


Caching
In computer science, a cache (pronounced /kæʃ/) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to longer access time) or to compute, compared to the cost of reading the cache. In other words, a cache is a temporary storage area where frequently accessed data can be stored for rapid access. Once the data is stored in the cache, it can be used in the future by accessing the cached copy rather than re-fetching or recomputing the original data.
A cache has proven to be extremely effective in many areas of computing because access patterns in typical computer applications have locality of reference. There are several kinds of locality, but the most relevant here is data that are accessed close together in time (temporal locality); the data might or might not be located physically close to each other (spatial locality).
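A minimal sketch of the idea in C: an "expensive" computation is cached in a small direct-mapped table, so a repeated request is served from the cached copy instead of being recomputed. The function, table size, and keys are invented purely for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

#define CACHE_SIZE 64

static int  cache_tag[CACHE_SIZE];     /* which key occupies each slot */
static long cache_value[CACHE_SIZE];   /* the cached result            */
static bool cache_valid[CACHE_SIZE];

static long expensive_compute(int n)   /* stand-in for a costly fetch  */
{
    long r = 0;
    for (long i = 0; i < 100000000L; i++)
        r += (r + n) % 7;
    return r;
}

static long cached_compute(int n)
{
    int slot = n % CACHE_SIZE;                        /* direct-mapped index */
    if (!cache_valid[slot] || cache_tag[slot] != n) { /* miss: compute, fill */
        cache_value[slot] = expensive_compute(n);
        cache_tag[slot]   = n;
        cache_valid[slot] = true;
    }
    return cache_value[slot];                         /* hit: fast path      */
}

int main(void)
{
    printf("%ld\n", cached_compute(5));   /* slow: cache miss */
    printf("%ld\n", cached_compute(5));   /* fast: cache hit  */
    return 0;
}
```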


Coherency and Consistency
Transactional Coherence and Consistency (TCC) offers a way to simplify parallel programming by executing all code in transactions. In TCC systems, transactions serve as the fundamental unit of parallel work, communication and coherence. As each transaction completes, it writes all of its newly produced state to shared memory atomically, while restarting other processors that have speculatively read from modified data. With this mechanism, a TCC-based system automatically handles data synchronization correctly, without programmer intervention. To gain the benefits of TCC, programs must be decomposed into transactions. Decomposing a program into transactions is largely a matter of performance tuning rather than correctness, and a few basic transaction programming optimization techniques are sufficient to obtain good performance over a wide range of applications with little programmer effort.
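TCC itself needs special hardware or runtime support, but the commit-or-restart idea can be illustrated with a small analogue using standard C11 atomics: each update reads the shared state, computes a new value speculatively, and commits atomically, restarting if another thread changed the state in between. This is only an analogy, not an implementation of TCC.

```c
#include <stdatomic.h>
#include <stdio.h>

static _Atomic long shared_counter = 0;

static void transactional_add(long delta)
{
    long observed, desired;
    do {
        observed = atomic_load(&shared_counter);   /* read phase        */
        desired  = observed + delta;               /* speculative work  */
        /* commit: succeeds only if no other thread wrote in between    */
    } while (!atomic_compare_exchange_weak(&shared_counter,
                                           &observed, desired));
}

int main(void)
{
    transactional_add(5);
    transactional_add(-2);
    printf("%ld\n", atomic_load(&shared_counter)); /* prints 3 */
    return 0;
}
```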


HARDWARE PROTECTION
Hardware protection can accomplish various things, including write protection for hard disk drives, memory protection, and monitoring and trapping unauthorized system calls. Again, no single tool is foolproof, and the "stronger" the hardware-based protection is, the more likely it is to interfere with the "normal" operation of your computer. The popular idea of write-protection may stop viruses *spreading* to the disk that is protected, but doesn't, in itself, prevent a virus from *running*. Also, some existing hardware protection schemes can be easily bypassed, fooled, or disconnected if the virus writer knows them well and designs a virus that is aware of the particular defense. The big problem with hardware protection is that there are few (if any) operations that a general-purpose computer can perform that are used by viruses *only*. Therefore, making a hardware protection system for such a computer typically involves deciding on some (small) set of operations that are "valid but not normally performed except by viruses", and designing the system to prevent these operations. Unfortunately, this means either designing limitations into the level of protection the hardware system provides or adding limitations to the computer's functionality by installing the hardware protection system. Much can be achieved, however, by making the hardware "smarter". This is double-edged: while it provides more security, it usually means adding a program in an EPROM to control it, which allows a virus to locate that program and call it directly past the point that grants access. It is still possible to implement this correctly, though, if the program is not in the address space of the main CPU, has its own CPU, and is connected directly to the hard disk and the keyboard. As an example, there is a PC-based product called ExVira which does this and seems fairly secure, but it is a whole computer on an add-on board and is quite expensive.

Dual Mode Operation
An automatic transmission for an automotive vehicle includes a continually variable drive mechanism having one sheave assembly fixed to an intermediate shaft and an input sheave assembly supported on an input shaft, a gearset driveably connected to the input shaft and an output shaft, a fixed-ratio drive mechanism in the form of a chain drive providing a torque delivery path between the intermediate shaft and the carrier of the gearset, a transfer clutch for connecting and releasing the first sheave of the variable drive mechanism and the input shaft, a low brake, and a reverse brake.

Input/Output Protection

In this input/output protection circuit, since the drain of the input protection MOS transistor is directly connected to the cathode of the input protection diode, and the source and gate of the input protection MOS transistor and the anode of the input protection diode are respectively grounded, an excessive voltage supplied from an external electrode is received by the cathode of the input protection diode and the drain of the input protection MOS transistor before it reaches the internal circuit of the semiconductor device, so that the input/output protection circuit is free from the increase of junction capacitance due to the pattern of the input protection diode. Moreover, when an excessive voltage is input to the device, the input protection diode breaks down first, so as to reduce the voltage at which the input protection MOS transistor starts to conduct. As a result, fast and reliable input protection is realized. Furthermore, since the input electrode can be directly connected to the drain of the input protection MOS transistor, the input protection circuit is also applicable to an output protection circuit, so the internal output circuit can be protected by applying it to an output MOS transistor.

Memory Protection
Memory protection is a way to control memory access rights on a computer, and is a part of nearly every modern operating system. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug within a process from affecting other processes, or the operating system itself. Memory protection also makes a Rootkit more difficult to implement. Memory protection is a behavior that is distinct from ASLR and the NX bit.
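A brief user-space illustration, assuming a POSIX system with mmap() and mprotect(): a page is mapped read-write, then made read-only, after which any write to it would be refused by the hardware/OS and delivered to the process as SIGSEGV.

```c
#define _DEFAULT_SOURCE            /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGE_SIZE);
    char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello");                 /* allowed: page is writable     */

    if (mprotect(p, page, PROT_READ)) { /* revoke write permission       */
        perror("mprotect");
        return 1;
    }
    printf("%s\n", p);                  /* reads are still allowed       */
    /* p[0] = 'H';  would now fault: the OS delivers SIGSEGV             */

    munmap(p, page);
    return 0;
}
```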

CPU Protection
The CPU protection feature enhances the efficiency of an HP device’s CPU and Content Addressable Memory (CAM). Some denial of service attacks make use of spoofed IP addresses. If the device must create CAM entries for a large number of spoofed IP addresses over a short period of time, it requires excessive CAM utilization. Similarly, if an improperly configured host on the network sends out a large number of packets that are normally processed by the CPU (for example, DNS requests), it requires excessive CPU utilization. The CPU protection feature allows you to configure the HP device to automatically take actions when thresholds related to high CPU or CAM usage are exceeded.

NOTE: The CPU protection feature is supported on the following devices, starting with software release 07.7.00:
• 9300 series Routing Switches with Standard or EP management modules
The CPU protection feature is disabled by default.


Thursday, June 18, 2009

RESEARCH_01

Differentiate Client-Server System and Peer-to-Peer System

Client/server describes the relationship between two computer programs in which one program, the client, makes a service request of another program, the server, which fulfills the request. In a network, the client/server model provides a convenient way to efficiently interconnect programs that are distributed across different locations.
Another structure for a distributed system is the peer-to-peer (P2P) system model. In this model, clients and servers are not distinguished from one another; instead, all nodes within the system may act as either a client or a server, depending on whether they are requesting or providing a service.
In a client-server system, the server can become a bottleneck; in a peer-to-peer system, services can be provided by several nodes throughout the network.
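To make the request/response pattern concrete, here is a minimal sketch of the server half, assuming a POSIX sockets API; the port number is arbitrary and error handling is omitted for brevity. A client would connect(), write its request, and read the reply.

```c
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9000);          /* arbitrary example port */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 5);

    for (;;) {
        int client = accept(srv, NULL, NULL);    /* wait for a client      */
        char req[256];
        ssize_t n = read(client, req, sizeof req);
        if (n > 0)
            write(client, req, (size_t)n);       /* fulfil the request     */
        close(client);
    }
}
```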



Differentiate Symmetric Multiprocessing and Asymmetric Multiprocessing

In asymmetric multiprocessing (ASMP), the operating system typically sets aside one or more processors for its exclusive use. The remainder of the processors run user applications. As a result, the single processor running the operating system can fall behind the processors running user applications. This forces the applications to wait while the operating system catches up, which reduces the overall throughput of the system. In the ASMP model, if the processor that fails is an operating system processor, the whole computer can go down.

Symmetric multiprocessing (SMP) technology is used to get higher levels of performance. In symmetric multiprocessing, any processor can run any type of thread. The processors communicate with each other through shared memory.
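As a rough user-space illustration of the contrast, assuming Linux with the GNU pthread affinity extensions: one "system" thread is pinned to CPU 0 (mirroring the ASMP idea of reserving a processor), while the other thread keeps the default affinity and may run on any processor (as in SMP). The thread roles and the choice of CPU 0 are illustrative only.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *system_work(void *arg) { (void)arg; puts("system thread"); return NULL; }
static void *user_work(void *arg)   { (void)arg; puts("user thread");   return NULL; }

int main(void)
{
    pthread_t sys_thread, usr_thread;
    pthread_attr_t attr;
    cpu_set_t set;

    /* ASMP-style: dedicate CPU 0 to the "system" thread only. */
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof set, &set);

    pthread_create(&sys_thread, &attr, system_work, NULL);  /* pinned to CPU 0 */
    pthread_create(&usr_thread, NULL, user_work, NULL);     /* any CPU (SMP)   */

    pthread_join(sys_thread, NULL);
    pthread_join(usr_thread, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```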


Goals/Purpose of Operating System

An operating system provides an environment for the execution of programs by providing services needed by those programs.

The services programs request fall into five categories (a short sketch after the lists below maps each category to a familiar Unix system call):
1. Process Control
2. File System Management
3. I/O Operation
4. Interprocess Communication
5. Information Maintenance

The operating system must try to satisfy these requests in a multi-user, multi-process environment while managing:
• Resource allocation
• Error Detection
• Protection
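
The sketch promised above, assuming a Unix-like system: each of the five service categories is matched to one familiar system call. The file name demo.txt is arbitrary.

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void)
{
    pid_t child = fork();                       /* 1. process control        */
    if (child == 0)
        _exit(0);
    wait(NULL);

    int fd = open("demo.txt", O_CREAT | O_WRONLY, 0644); /* 2. file system   */
    write(fd, "hello\n", 6);                    /* 3. I/O operation          */
    close(fd);

    int pipefd[2];
    pipe(pipefd);                               /* 4. interprocess communication */
    close(pipefd[0]);
    close(pipefd[1]);

    printf("pid %d\n", (int)getpid());          /* 5. information maintenance */
    return 0;
}
```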


Advantages of a Parallel System

In terms of disproportionality, Parallel systems usually give results which fall somewhere between pure plurality/majority and pure PR systems. One advantage is that, when there are enough PR seats, small minority parties which have been unsuccessful in the plurality/majority elections can still be rewarded for their votes by winning seats in the proportional allocation. In addition, a Parallel system should, in theory, fragment the party system less than a pure PR electoral system.