Thursday, July 30, 2009

Single Threaded Process




Multi-Threaded Process
  • Benefits of Multi-threaded Programming
*Responsiveness – parts of a program can continue running even if other parts are blocked. The book points out that a multi-threaded web browser could still allow user interaction in one thread while downloading a gif in another thread…
*Resource sharing – there are pros and cons here. Threads share the same address space, and with it memory and other resources (files, etc.). (There are issues here…)
*Economy – since threads share resources, it is cheaper to context-switch between threads than between processes. This should be clear.
*Utilization of MP architectures – there can be significant performance gains on a multiprocessor system, where different threads may be running simultaneously (in parallel) on multiple processors.
*Of course, there’s never ‘a free lunch,’ as we will see later. (There’s always a cost; nothing this good comes free.)
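The responsiveness and resource-sharing benefits above can be sketched in a few lines of Python (a toy illustration, not from the book; the "download" is simulated with a sleep):

```python
import threading
import time

results = []  # shared between threads: both run in the same address space

def download(name):
    # Simulates a long-running download (e.g., fetching a gif).
    time.sleep(0.1)
    results.append(name)

# Start the "download" in a worker thread.
worker = threading.Thread(target=download, args=("gif",))
worker.start()

# The main thread is not blocked by the sleeping worker; it can keep
# handling "user interaction" while the download proceeds.
ui_events = [f"event-{i}" for i in range(3)]

worker.join()  # wait for the worker; results was shared the whole time
```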
  • User Thread
..Thread management done by a user-level threads library
..Three primary thread libraries:
-POSIX Pthreads
-Win32 threads
-Java threads
  • Kernel Thread
*Supported by the Kernel
*Examples

-Windows XP/2000
-Solaris
-Linux
-Tru64 UNIX
-Mac OS X
  • Thread Library
Programmers rarely manage threads by hand; they receive development help via thread libraries germane to their specific development APIs.
*A thread library provides an API for creating and managing threads. Java, for example, has an extensive API for thread creation and management.
-There are two primary ways to implement a thread library:
1. Provide the thread library entirely in user space – no kernel support.
-All code and data structures for the library exist in user space.
-Invoking a function in the library is a local function call in user space, NOT a system call. (This is good.)
2. Implement a kernel-level library supported by the OS.
-Here, code and data structures exist in kernel space.
-Unfortunately, invoking a function in the library results in a system call to the kernel.
  • Multithreading Models

Many-to-one Model

Many user-level threads map to a single kernel thread.
•Efficient: thread management is done in user space, with no kernel involvement.
•Poor concurrency: a blocking system call blocks the entire process, and threads cannot run in parallel on multiple processors.
•How to have both good concurrency and efficiency?


One-to-one Model

*Each user-level thread maps to a kernel thread.
•Is this implementation good (concurrency vs. efficiency)?
–Good concurrency, why? (a blocking syscall does not affect other threads)
–Expensive, why? (user-thread creation -> kernel-thread creation)
*Examples
-Windows NT/XP/2000
-Linux
-Solaris 9 and later


Many-to-Many Model

Many user threads are mapped to a smaller or equal number of kernel threads.
–Why is this better than many-to-one? (concurrency & multiprocessors)
–Why is this better than one-to-one? (efficiency)
•Want concurrency like one-to-one as well? –Two-level model.

Interprocess Communication


Inter-process communication (IPC) is a set of techniques for the exchange of data among multiple threads in one or more processes. Processes may be running on one or more computers connected by a network. IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may vary based on the bandwidth and latency of communication between the threads, and the type of data being communicated.
There are several reasons for providing an environment that allows process cooperation:
Information sharing
Computation speedup
Modularity
Convenience
IPC may also be referred to as inter-thread communication and inter-application communication.
Together with the address space concept, IPC is a foundation for address space independence/isolation.

Direct Communication

Processes must name each other explicitly:

send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q

Properties of the communication link:
-Links are established automatically
-A link is associated with exactly one pair of communicating processes
-Between each pair there exists exactly one link
-The link may be unidirectional, but is usually bi-directional

Indirect Communication


•messages sent to and received from mailboxes (or ports)
–mailboxes can be viewed as objects into which messages can be placed by processes and from which messages can be removed by other processes
–each mailbox has a unique ID
–two processes can communicate only if they have a shared mailbox
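The mailbox idea can be sketched with a thread-safe queue standing in for the shared mailbox (a toy model using threads rather than full processes):

```python
import queue
import threading

# The mailbox, not a process, is what the communicating parties name.
# Any party holding a reference to it can deposit or remove messages.
mailbox = queue.Queue()

def sender():
    mailbox.put("report ready")       # send(mailbox, message)

def receiver(out):
    out.append(mailbox.get())         # receive(mailbox, message)

received = []
t_recv = threading.Thread(target=receiver, args=(received,))
t_send = threading.Thread(target=sender)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
```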

Synchronization


Synchronization or synchronisation is timekeeping that requires the coordination of events to operate a system in unison. The familiar conductor of an orchestra serves to keep the orchestra in time. Systems operating with all their parts in synchrony are said to be synchronous or in sync. Some systems may be only approximately synchronized, or plesiochronous. For some applications relative offsets between events need to be determined; for others only the order of events is important.

  • Blocking Send
The sender blocks until the message is received. A blocking send can be combined with a non-blocking receive, and vice versa.

  • Nonblocking Send

The sender sends the message and continues: the call returns as soon as the send has been posted, so the buffer might not yet be free for reuse. (In MPI terms, a nonblocking send can use any mode – synchronous, buffered, standard, or ready.)

  • Blocking Receive

Blocking receive has the receiver block until a message is available

  • Nonblocking Receive

Non-blocking receive has the receiver receive a valid message or null.
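The blocking/non-blocking receive distinction can be sketched with Python's queue API (a toy model; `None` plays the role of the "null" returned by a non-blocking receive):

```python
import queue

mailbox = queue.Queue()

def try_receive(mbox):
    # Non-blocking receive: return a valid message if one is
    # available, otherwise return None ("null") instead of waiting.
    try:
        return mbox.get(block=False)
    except queue.Empty:
        return None

empty_result = try_receive(mailbox)   # no message yet -> None
mailbox.put("ping")                   # non-blocking send: enqueue and continue
msg = try_receive(mailbox)            # now a valid message is returned

# A blocking receive would be mailbox.get(), which waits
# until a message becomes available.
```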

Buffering


•the number of messages that can reside in a link temporarily
–Zero capacity - queue length 0
»sender must wait until receiver ready to take the message
–Bounded capacity - finite length queue
»messages can be queued as long as queue not full
»otherwise sender will have to wait
–Unbounded capacity
»any number of messages can be queued - in virtual space?
»sender never delayed

  • Zero Capacity

Queue length 0 – the sender must wait for the receiver (rendezvous).

  • Bounded Capacity

Finite length of n messages – the sender must wait if the link is full.

  • Unbounded Capacity

Infinite length – the sender never waits.
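Bounded and unbounded capacity can be sketched with Python queues (a toy model; Python's queue has no zero-capacity mode, so the rendezvous case is not shown):

```python
import queue

# Bounded capacity: at most n messages can be queued.
bounded = queue.Queue(maxsize=2)
bounded.put("m1")
bounded.put("m2")        # link is now full

# A further put() would block the sender until the receiver removes
# a message; put_nowait makes the full condition visible instead.
try:
    bounded.put_nowait("m3")
    sender_would_wait = False
except queue.Full:
    sender_would_wait = True

# Unbounded capacity: maxsize=0 (the default) means the
# sender is never delayed, no matter how many messages queue up.
unbounded = queue.Queue()
for i in range(1000):
    unbounded.put(i)
```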

Producer-Consumer Example

  • Producer

The producer process generates the data items:
»get an empty message block from mayproduce
»put the data item in the block
»send the message to mayconsume

  • Consumer

The consumer process uses the data items:
»get a message from mayconsume
»consume the data in the block
»return the empty message block to the mayproduce mailbox
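The two-mailbox scheme above can be sketched with Python queues and threads (a toy model; `mayproduce` holds the pool of empty blocks, `mayconsume` the filled ones, and `None` stands in for an empty block):

```python
import queue
import threading

CAPACITY = 4
mayproduce = queue.Queue()   # pool of empty message blocks
mayconsume = queue.Queue()   # filled blocks awaiting the consumer

for _ in range(CAPACITY):    # prime the pool with empty blocks
    mayproduce.put(None)

def producer(items):
    for item in items:
        mayproduce.get()         # get an empty block from mayproduce
        mayconsume.put(item)     # put the data in it, send to mayconsume

def consumer(n, out):
    for _ in range(n):
        item = mayconsume.get()  # get a message from mayconsume
        out.append(item)         # consume the data in the block
        mayproduce.put(None)     # return the empty block to mayproduce

data = [1, 2, 3, 4, 5, 6]
consumed = []
t_prod = threading.Thread(target=producer, args=(data,))
t_cons = threading.Thread(target=consumer, args=(len(data), consumed))
t_prod.start()
t_cons.start()
t_prod.join()
t_cons.join()
```

Because the pool starts with only CAPACITY empty blocks, the producer automatically waits whenever it gets more than CAPACITY items ahead of the consumer.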

Thursday, July 16, 2009

  • Cooperating Processes

-An independent process cannot affect or be affected by the execution of another process.
-A cooperating process can affect or be affected by the execution of another process.
-Advantages of process cooperation:
-Information sharing
-Computation speed-up
-Modularity
-Convenience

  • Interprocess Communication

Inter-process communication (IPC) is a set of techniques for the exchange of data among multiple threads in one or more processes; see the fuller discussion of IPC, its motivations (information sharing, computation speedup, modularity, convenience), and its message-passing variants above.

The Concept of Process







  • Processes are among the most useful abstractions in operating systems (OS) theory and design, since they offer a unified framework to describe all the various activities of a computer as they are managed by the OS. The term process was (allegedly) first used by the designers of Multics in the '60s, to mean something more general than a job in a multiprogramming environment. Similar ideas, however, were at the heart of many independent system design efforts at the time, so it's rather difficult to point at one particular person or team as the originator of the concept.
    As is common for concepts discovered and re-discovered many times on the field before being put on theory books, several definitions have been proposed for the term process, including picturesque ones like ``the animated spirit of a program''. We'd rather draw upon the very general ideas of system theory instead, and regard a process as a representation of the state of an instance of a program in execution.
    In this definition, the word instance (also ``image'', ``activation'') refers to the fact that in a multiprogramming environment several copies of the same program (or of a piece of executable common to different programs) may be concurrently executed by different users or applications. Instead of maintaining in main memory several copies of the executable code of the program, it is often possible to store in memory just one copy of it, and maintain a description of the current status (program counter position, values of the variables, etc.) of each executing activation of it. Main memory usage is in this way minimized. This technique is called code reentrance, and its implementation requires both careful crafting of the reentrant routines, whose instructions constitute the permanent part of the activation, and provisions in the OS in order to maintain an activation record of the temporary part relative to each activation, such as program counter value, variable values, a pointer back to the calling routine and to its activation record, etc.
    Similarly to the way in which activation records allow distinguishing between different activations of the same piece of executable code, by maintaining information about their status, a process description allows an OS to manage, without ensuing chaos, the concurrent execution of different programs all sharing the same resources in terms of processors, memory, peripherals. Again, the keyword here is state, i.e., in system theory parlance, all the information that, along with the knowledge of the current and future input values, allows predicting the evolution of a deterministic system like a program.
    What information is this? Obviously the program's executable code is a part of it, as is the associated data needed by the program (variables, I/O buffers, etc.), but this is not enough. The OS needs also to know about the execution context of the program, which includes -at the very least- the content of the processor registers and the work space in main memory, and often additional information like a priority value, whether the process is running or waiting for the completion of an I/O event, etc.
    Consider the scheme in Fig. 1, which depicts a simple process implementation scheme. There are two processes, A and B, each with its own instructions, data and context, stored in main memory. The OS maintains, also in memory, a list of pointers to the above processes, and perhaps some additional information for each of them. The content of a ``current process'' location identifies which process is currently being executed. The processor registers then contain data relevant to that particular process. Among them are the base and top addresses of the area in memory reserved to the process: an error condition would be trapped if the program being executed tried to write in a memory word whose address is outside those bounds. This allows process protection and prevents unwanted interference. When the OS decides, according to a predefined policy, that the time has come to suspend the current process, the whole content of the processor registers would be saved in the process's context area, and the registers would be restored with the context of another process. Since the program counter register of the latter process would be restored too, execution would restart automatically from the previous suspension point.


  • Process State

The process state consists of everything necessary to resume the process's execution if it is somehow put aside temporarily. The process state consists of at least the following:

  • Code for the program.
  • Program's static data.
  • Program's dynamic data.
  • Program's procedure call stack.
  • Contents of general purpose registers.
  • Contents of the program counter (PC).
  • Contents of the program status word (PSW).
  • Operating system resources in use.


  • Process Control Block

A Process Control Block (PCB, also called Task Control Block or Task Struct) is a data structure in the operating system kernel containing the information needed to manage a particular process. The PCB is "the manifestation of a process in an operating system".[1]

Included information
Implementations differ, but in general a PCB will include, directly or indirectly:
The identifier of the process (a process identifier, or PID)
Register values for the process including, notably,
the Program Counter value for the process
The address space for the process
Priority (a higher-priority process gets preference; e.g., the nice value on Unix operating systems)
Process accounting information, such as when the process was last run, how much CPU time it has accumulated, etc.
Pointer to the next PCB i.e. pointer to the PCB of the next process to run
I/O Information (i.e. I/O devices allocated to this process, list of opened files, etc)
During a context switch, the running process is stopped and another process is given a chance to run. The kernel must stop the execution of the running process, copy out the values in hardware registers to its PCB, and update the hardware registers with the values from the PCB of the new process.
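The register save/restore at a context switch can be sketched as a toy model (the register names and PCB layout here are purely illustrative, not any real kernel's):

```python
# The "hardware" registers of our toy CPU.
cpu = {"pc": 0, "sp": 0, "acc": 0}

def make_pcb(pid):
    # A minimal PCB: just an identifier and saved register values.
    return {"pid": pid, "registers": {"pc": 0, "sp": 0, "acc": 0}}

def context_switch(old_pcb, new_pcb):
    # Copy the hardware registers out to the old process's PCB...
    old_pcb["registers"] = dict(cpu)
    # ...and update the hardware registers from the new process's PCB.
    cpu.update(new_pcb["registers"])

pcb_a, pcb_b = make_pcb(1), make_pcb(2)

# Process A runs for a while and changes the registers.
cpu.update({"pc": 104, "sp": 7800, "acc": 42})

context_switch(pcb_a, pcb_b)   # A is preempted, B is dispatched
# ... B runs ...
context_switch(pcb_b, pcb_a)   # switch back: A resumes where it stopped
```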

  • Threads

A process with two threads of execution.
In computer science, a thread of execution results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, the threads or tasks will generally run at the same time, with each processor or core running a particular thread or task. Support for threads in programming languages varies: a number of languages simply do not support having more than one execution context inside the same program executing at the same time. Examples of such languages include Python and OCaml, because the parallelism of their runtimes is limited by a central lock, called the "Global Interpreter Lock" in Python and the "master lock" in OCaml. Other languages may be limited because they use user threads, which are not visible to the kernel and thus cannot be scheduled to run concurrently. Kernel threads, on the other hand, are visible to the kernel and can run concurrently.
Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. The kernel of an operating system allows programmers to manipulate threads via the system call interface. Threads implemented this way are called kernel threads; a lightweight process (LWP) is a specific type of kernel thread that shares state and information with other LWPs.
Programs can have user-space threads when threading with timers, signals, or other methods to interrupt their own execution, performing a sort of ad-hoc time-slicing.



Process Scheduling


  • Scheduling Queues

Job queue – set of all processes in the system.

Ready queue – set of all processes residing in main memory, ready and waiting to execute.

Device queues – set of processes waiting for an I/O device.

Processes migrate between the various queues.

Representation of Process Scheduling
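The migration between queues can be sketched with a couple of deques (a toy model; process names are made up):

```python
from collections import deque

# Toy scheduling queues: processes migrate between the ready queue
# and a device queue as they request and complete I/O.
ready_queue = deque(["P1", "P2", "P3"])
disk_queue = deque()

# Dispatch the process at the head of the ready queue.
running = ready_queue.popleft()          # P1 gets the CPU

# P1 issues an I/O request: it migrates to the disk's device queue.
disk_queue.append(running)
running = ready_queue.popleft()          # P2 is dispatched next

# The I/O completes: P1 migrates back to the ready queue.
ready_queue.append(disk_queue.popleft())
```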




  • Schedulers

Scheduling is a key concept in computer multitasking and multiprocessing operating system design, and in real-time operating system design. In modern operating systems, there are typically many more processes running than there are CPUs available to run them. Scheduling refers to the way processes are assigned to run on the available CPUs. This assignment is carried out by software known as a scheduler.



The scheduler is concerned mainly with:
CPU utilization - to keep the CPU as busy as possible.
Throughput - the number of processes that complete their execution per time unit.
Turnaround - amount of time to execute a particular process.
Waiting time - amount of time a process has been waiting in the ready queue.
Response time - amount of time it takes from when a request was submitted until the first response is produced.
Fairness - Equal CPU time to each thread.
In real-time environments, such as embedded systems for automatic control in industry (for example robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable.
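The turnaround and waiting-time criteria above can be computed for a simple first-come, first-served run (burst times are illustrative, and all jobs are assumed to arrive at time 0):

```python
# Jobs run in arrival order under FCFS; each value is a CPU burst.
bursts = {"P1": 24, "P2": 3, "P3": 3}

completion = {}
clock = 0
for pid, burst in bursts.items():
    clock += burst               # each job runs to completion in turn
    completion[pid] = clock

# Turnaround = completion - arrival (arrival is 0 here);
# Waiting = turnaround - burst (time spent in the ready queue).
turnaround = {pid: completion[pid] for pid in bursts}
waiting = {pid: turnaround[pid] - bursts[pid] for pid in bursts}
avg_waiting = sum(waiting.values()) / len(waiting)
```

Note how the long burst of P1 inflates the waiting times of P2 and P3; running the short jobs first would cut the average waiting time sharply.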



  • Context Switch

A context switch is the computing process of storing and restoring the state (context) of a CPU such that multiple processes can share a single CPU resource. The context switch is an essential feature of a multitasking operating system. Context switches are usually computationally intensive and much of the design of operating systems is to optimize the use of context switches. A context switch can mean a register context switch, a task context switch, a thread context switch, or a process context switch. What constitutes the context is determined by the processor and the operating system.




Operation on Processes

  • Process Creation

Nachos processes are formed by creating an address space, allocating physical memory for the address space, loading the contents of the executable into physical memory, initializing registers and address translation tables, and then invoking machine::Run() to start execution. Run() simply ``turns on'' the simulated MIPS machine, having it enter an infinite loop that executes instructions one at a time.
Stock Nachos assumes that only a single user program exists at a given time. Thus, when an address space is created, Nachos assumes that no one else is using physical memory and simply zeros out all of physical memory (e.g., the mainMemory character array). Nachos then reads the binary into physical memory starting at location mainMemory and initializes the translation tables to do a one-to-one mapping between virtual and physical addresses (e.g., so that any virtual address N maps directly into the physical address N). Initialization of registers consists of zeroing them all out, setting PCReg and NextPCReg to 0 and 4 respectively, and setting the stackpointer to the largest virtual address of the process (the stack grows downward towards the heap and text). Nachos assumes that execution of user-programs begins at the first instruction in the text segment (e.g., virtual address 0).
When support for multiple user processes has been added, two other Nachos routines are necessary for process switching. Whenever the current process is suspended (e.g., preempted or put to sleep), the scheduler invokes the routine AddrSpace::SaveUserState(), in order to properly save address-space related state that the low-level thread switching routines do not know about. This becomes necessary when using virtual memory; when switching from one process to another, a new set of address translation tables needs to be loaded. The Nachos scheduler calls SaveUserState() whenever it is about to preempt one thread and switch to another. Likewise, before switching to a new thread, the Nachos scheduler invokes AddrSpace::RestoreUserState. RestoreUserState() ensures that the proper address translation tables are loaded before execution resumes.

  • Process Termination

When a process finishes executing, HP-UX terminates it using the exit system call.
Circumstances might require a process to synchronize its execution with a child process. This is done with the wait system call, which has several related routines.
During the exit system call, a process enters the zombie state and must dispose of child processes. Releasing process and thread structures no longer needed by the exiting process or thread is handled by three routines -- freeproc(), freethread(), and kissofdeath().
This section will describe each process-termination routine in turn.
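Termination and reaping can be sketched with the POSIX fork/exit/wait pattern (Unix-only; this is a generic sketch, not HP-UX's internal freeproc()/freethread()/kissofdeath() routines):

```python
import os

# Parent forks a child; the child terminates via exit with a status
# code, and the parent reaps it with waitpid. Until the parent waits,
# the exited child lingers in the zombie state.
pid = os.fork()
if pid == 0:
    # Child: terminate immediately with exit status 7.
    os._exit(7)
else:
    # Parent: block until the child terminates, then collect its status.
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)   # recover the child's exit status
```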

Thursday, July 9, 2009

Quiz #3

What are the major activities of an Operating System with regards to process management?

  • Process creation and deletion
  • Process suspension and resumption
  • Provision of mechanisms for:
  • process synchronization
  • process communication
  • deadlock handling

What are the major activities of an Operating System with regards to memory management?

  • Keep track of which parts of memory are currently being used and by whom
  • Decide which processes to load when memory space becomes available
  • Allocate and Deallocate memory space as needed

What are the major activities of an Operating System with regards to secondary storage management

  • Free Space Management
  • Storage Allocation
  • Disk Scheduling

What are the major activities of an Operating System with regards to file management?

  • File creation and deletion
  • Directory creation and deletion
  • Support of primitives for manipulating files and directories
  • Mapping files onto secondary storage
  • File backup on stable (nonvolatile) storage

What is the purpose of the command interpreter?

  • It serves as the interface between the user and the Operating System.
  • User friendly, mouse based windows environment in the Macintosh and Microsoft windows.
  • In MS-DOS and UNIX, commands are typed on a keyboard and displayed on a screen or printing terminal, with the Enter or Return key indicating that a command is complete and ready to be executed.

Tuesday, July 7, 2009

System Generation

  • (SYStem GENeration) The installation of a new or revised operating system. It includes selecting the appropriate utility programs and identifying the peripheral devices and storage capacities of the system the operating system will be controlling.

System Boot

The typical computer system boots over and over again with no problems, starting the computer's operating system (OS) and identifying its hardware and software components that all work together to provide the user with the complete computing experience. But what happens between the time that the user powers up the computer and when the GUI icons appear on the desktop?
In order for a computer to successfully boot, its BIOS, operating system and hardware components must all be working properly; failure of any one of these three elements will likely result in a failed boot sequence.
When the computer's power is first turned on, the CPU initializes itself, which is triggered by a series of clock ticks generated by the system clock. Part of the CPU's initialization is to look to the system's ROM BIOS for its first instruction in the startup program. The ROM BIOS stores the first instruction, which is the instruction to run the power-on self test (POST), in a predetermined memory address. POST begins by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect a battery failure, it then continues to initialize the CPU, checking the inventoried hardware devices (such as the video card), secondary storage devices, such as hard drives and floppy drives, ports and other hardware devices, such as the keyboard and mouse, to ensure they are functioning properly.
Once the POST has determined that all components are functioning properly and the CPU has successfully initialized, the BIOS looks for an OS to load.
The BIOS typically looks to the CMOS chip to tell it where to find the OS, and in most PCs, the OS loads from the C drive on the hard drive even though the BIOS has the capability to load the OS from a floppy disk, CD or ZIP drive. The order of drives that the CMOS looks to in order to locate the OS is called the boot sequence, which can be changed by altering the CMOS setup. Looking to the appropriate boot drive, the BIOS will first encounter the boot record, which tells it where to find the beginning of the OS and the subsequent program file that will initialize the OS.
Once the OS initializes, the BIOS copies its files into memory and the OS basically takes over control of the boot process. Now in control, the OS performs another inventory of the system's memory and memory availability (which the BIOS already checked) and loads the device drivers that it needs to control the peripheral devices, such as a printer, scanner, optical drive, mouse and keyboard. This is the final stage in the boot process, after which the user can access the system’s applications to perform tasks.

Virtual Machine

In computer science, a virtual machine (VM) is a software implementation of a machine (computer) that executes programs like a real machine.


A virtual machine is a type of computer application used to create a virtual environment, which is referred to as virtualization. Virtualization allows the user to see the infrastructure of a network through a process of aggregation. Virtualization may also be used to run multiple operating systems at the same time. Through the help of a virtual machine, the user can operate software located on the computer platform.
There are several different types of virtual machines. Most commonly, the term is used to refer to hardware virtual machine software, also known as a hypervisor or virtual machine monitor. This type of virtual machine software makes it possible to perform multiple identical executions on one computer. In turn, each of these executions runs an operating system. This allows multiple applications to be run on different operating systems, even those they were not originally intended for.
Through the use of the hardware virtual machine software, the user has a seemingly private machine with fully functional hardware that is separate from other users. Hardware virtual machine software also makes it possible for users to boot and restart their machines quickly, since tasks such as hardware initialization are not necessary.
Virtual machine can also refer to application virtual machine software. With this software, the application is isolated from the computer being used. This software is intended to be used on a number of computer platforms. This makes it unnecessary to create separate versions of the same software for different operating systems and computers. Java Virtual Machine is a very well known example of an application virtual machine.
A virtual machine can also be a virtual environment, which is also known as a virtual private server. A virtual environment is used for running programs at the user level. Therefore, it is used solely for applications and not for drivers or operating system kernels.
A virtual machine may also be a group of computers that work together to create a more powerful machine. In this type of virtual machine, the software makes it possible for one environment to be formed throughout several computers. This makes it appear to the end user as if he or she is using a single computer, when there are actually numerous computers at work.

  • Implementation


  • Benefits
  1. Designed for virtual machines running on Windows Server 2008 and Microsoft Hyper-V Server. Hyper-V is the next-generation hypervisor-based virtualization platform from Microsoft, which is designed to offer high performance, enhanced security, high availability, scalability, and many other improvements. VMM is designed to take full advantage of these foundational benefits through a powerful yet easy-to-use console that streamlines many of the tasks necessary to manage virtualized infrastructure. Even better, administrators can manage their traditional physical servers right alongside their virtual resources through one unified console.
  2. Support for Microsoft Virtual Server and VMware ESX. With this release, VMM now manages VMware ESX virtualized infrastructure in conjunction with the Virtual Center product. Now administrators running multiple virtualization platforms can rely on one tool to manage virtually everything. With its compatibility with VMware VI3 (through Virtual Center), VMM now supports features such as VMotion and can also provide VMM-specific features like Intelligent Placement to VMware servers.
  3. Performance and Resource Optimization (PRO). PRO enables the dynamic management of virtual resources through Management Packs that are PRO enabled. Utilizing the deep monitoring capabilities of System Center Operations Manager 2007, PRO enables administrators to establish remedial actions for VMM to execute if poor performance or pending hardware failures are identified in hardware, operating systems, or applications. As an open and extensible platform, PRO encourages partners to design custom management packs that promote compatibility of their products and solutions with PRO’s powerful management capabilities.
  4. Maximize datacenter resources through consolidation. A typical physical server in the datacenter operates at only 5 to 15 percent CPU capacity. VMM can assess and then consolidate suitable server workloads onto virtual machine host infrastructure, thus freeing up physical resources for repurposing or hardware retirement. Through physical server consolidation, continued datacenter growth is less constrained by space, electrical, and cooling requirements.
  5. Machine conversions are a snap! Converting a physical machine to a virtual one can be a daunting undertaking—slow, problematic, and typically requiring you to halt the physical server. But thanks to the enhanced P2V conversion in VMM, P2V conversions will become routine. Similarly, VMM also provides a straightforward wizard that can convert VMware virtual machines to VHDs through an easy and speedy Virtual-to-Virtual (V2V) transfer process.
  6. Quick provisioning of new machines. In response to new server requests, a truly agile IT department delivers new servers to its business clients anywhere in the network infrastructure with a very quick turnaround. VMM enables this agility by providing IT administrators with the ability to deploy virtual machines in a fraction of the time it would take to deploy a physical server. Through one console, VMM allows administrators to manage and monitor virtual machines and hosts to ensure they are meeting the needs of the corresponding business groups.
  7. Intelligent Placement minimizes virtual machine guesswork in deployment VMM does extensive data analysis on a number of factors before recommending which physical server should host a given virtual workload. This is especially critical when administrators are determining how to place several virtual workloads on the same host machine. With access to historical data—provided by Operations Manager 2007—the Intelligent Placement process is able to factor in past performance characteristics to ensure the best possible match between the virtual machine and its host hardware.
  8. Delegated virtual machine management for Development and Test Virtual infrastructures are commonly used in Test and Development environments, where there is constant provisioning and tear down of virtual machines for testing purposes. This latest version of VMM features a thoroughly reworked and improved self-service Web portal, through which administrators can delegate this provisioning role to authorized users while maintaining precise control over the management of virtual machines.
  9. The library helps keep virtual machine components organized To keep a data center’s virtual house in order, VMM provides a centralized library to store various virtual machine “building blocks”—off-line machines and other virtualization components. With the library’s easy-to-use structured format, IT administrators can quickly find and reuse specific components, thus remaining highly productive and responsive to new server requests and modifications.
  10. Windows PowerShell provides a rich management and scripting environment The entire VMM application is built on the command-line and scripting environment Windows PowerShell. This version of VMM adds additional PowerShell cmdlets and “view script” controls, which allow administrators to customize and automate operations at an unprecedented level.
  • Examples

Zones are not virtual machines, but an example of "operating-system virtualization". This includes other "virtual environments" (also called "virtual servers") such as Virtuozzo, FreeBSD Jails, Linux-VServer, chroot jail, and OpenVZ. These provide some form of encapsulation of processes within an operating system. These technologies have the advantages of being more resource-efficient than full virtualization and having better observability into multiple guests simultaneously; the disadvantage is that, generally, they can only run a single operating system and a single version/patch level of that operating system - so, for example, they cannot be used to run two applications, one of which only supports a newer OS version and the other only supporting an older OS version on the same hardware. However, Sun Microsystems has enhanced Solaris Zones to allow some zones to behave like Solaris 8 or Solaris 9 systems by adding a system call translator.

Thursday, July 2, 2009

SYSTEM STRUCTURE

  • Simple Structure
These are structures with a low degree of departmentalisation and a wide span of control. Authority is largely centralised in a single person, with very little formalisation. It is also called a 'flat structure'.
It usually has only two or three vertical levels, a flexible set of employees, and generally one person in whom the power of decision-making is invested. This simple structure is most widely practiced in small business settings where the manager and owner happen to be the same person. Its advantage lies in its simplicity, which makes it responsive, fast, accountable and easy to maintain. However, it becomes grossly inadequate as and when the organisation grows in size. The simple structure remains popular because of its flexibility, responsiveness and high degree of adaptability to change.


  • Layered Approach

Yesterday, in passing, I mentioned that I’d like to see Windows evolve into an operating system where the OS, installed applications and user data were each contained in separate layers. I didn’t go into any detail on this because, a) I thought I’d already covered it, and b) I didn’t have the space to go into it. Since several of you have asked me what I meant by that let me take a few minutes to explain what I mean by this layered approach.

The way that Windows currently works in terms of user data and installed applications is a mess. It’s not a deliberate mess but more a natural result of opting to provide ongoing support for outdated ideas. Windows Vista is a modern OS that still clings desperately onto Windows 95 paradigms that belong to an era where users had a lot less data than they do now.

So what’s the problem? Well, the main problem with Windows Vista stems from the fact that it still assumes that a PC is fitted with a single hard drive, and as a result of this flawed thinking wants to cram everything onto that single drive. This is valid Windows 95 thinking, when we measured drive capacities in MB and buying a hard drive really gave your credit card a punching. Times have changed, capacities have increased unbelievably, a dollar can buy you over 7GB of storage and you can pick up 500GB of storage for around $65. The reason that most PCs are still sold with a single drive is simply because Windows still makes it difficult for the average user to effectively make use of that second drive. In my experience this single drive approach is responsible for more, and more catastrophic, data loss than any bit of malware. Another flawed aspect of this Windows 95 thinking is that we are still interacting with the PC via a file system structure, and this has now become mindbogglingly complex where mistakes happen quickly and easily.

OK, so what’s different about the layered approach? Well, under my system you’d see three distinct layers.

  • OS layer
  • Application layer
  • Data layer

Under this regime the OS layer would be at the core of the system and ideally applications and user data should not interfere or tamper with this. In the real world I’m certain that security applications would need access to this layer but on the whole tampering with this layer should be frowned upon. This layer should be self contained in that it can be backed up, repaired or wiped and reinstalled totally separate to the other two layers. Barring activation hurdles it should be easy to transfer this OS layer from one system to another, be that a physical one or a virtual one.

Note: Under this layers model any installed drivers would form part of a sub-set of the OS layer.

The application layer would house the apps that the user installs. This layer should again be self contained in that it can be backed up, wiped or restored totally separate to the other two layers. All applications and data relating to applications would be stored within this layer, with each app compartmentalized. Uninstalling an application should remove all traces of that application (apart from user data). Again, other than for activation/license management, it should be possible to take this entire layer and move it to another physical or virtual PC.

Then there’s the data layer. By now you’re getting the idea behind this layers business. The data layer would contain user data. I can think of several different approaches this could take, but ideally it should be flexible enough to accommodate different kinds of users - users who want to store data based on the app used to generate it, based on project, chronologically … etc. This data layer should be easy to back up and restore, and should ideally live on a drive separate from the one the OS and apps are on. It might also be possible for the data to be mirrored between two drives to provide redundancy. After all, hard drives are large enough to cater for this level of redundancy for most users, and if you had three drives fitted, you could have redundancy between two data drives.

The core idea behind the layered approach is that, apart from some products being subject to activation or some other license management, neither the OS, apps nor data should be tied together or tied to a single system. Also, each layer is isolated from the others. I’m not suggesting that direct access to the file system shouldn’t be available, it’s just that it shouldn’t be necessary to delve directly into the file system for simple file-related operations, especially those related to user data.

Now, like I said yesterday, this is blue-sky thinking and I don’t expect this to happen any time soon. We’re certainly not going to see any such radical changes in Windows 7.

SYSTEM CALLS

  • Process Control

Process control is a statistics and engineering discipline that deals with architectures, mechanisms, and algorithms for controlling the output of a specific process. See also control theory.
For example, heating a room is a process with the specific, desired outcome of reaching and maintaining a defined temperature (e.g. 20°C), kept constant over time. Here, the temperature is the controlled variable. At the same time, it is the input variable, since it is measured by a thermometer and used to decide whether or not to heat. The desired temperature (20°C) is the setpoint. The state of the heater (e.g. the setting of the valve allowing hot water to flow through it) is called the manipulated variable, since it is subject to control actions.
A commonly used control device called a programmable logic controller, or PLC, is used to read a set of digital and analog inputs, apply a set of logic statements, and generate a set of analog and digital outputs. Using the example in the previous paragraph, the room temperature would be an input to the PLC. The logical statements would compare the setpoint to the input temperature and determine whether more or less heating was necessary to keep the temperature constant. A PLC output would then either open or close the hot water valve by an incremental amount, depending on whether more or less hot water was needed. Larger, more complex systems can be controlled by a Distributed Control System (DCS) or a SCADA system.
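The heating control loop described above can be sketched in a few lines. This is a minimal on/off (bang-bang) controller with a deadband, not any real PLC logic; the setpoint, hysteresis, and thermal-response numbers are invented for illustration.

```python
# Minimal sketch of the on/off control loop from the heating example.
# All constants here are hypothetical values chosen for illustration.

SETPOINT = 20.0   # desired room temperature in degrees C (the setpoint)
HYSTERESIS = 0.5  # deadband to avoid rapidly cycling the valve

def control_step(measured_temp, heater_on):
    """Decide whether the hot-water valve should be open.

    measured_temp : current thermometer reading (the input variable)
    heater_on     : current valve state (the manipulated variable)
    """
    if measured_temp < SETPOINT - HYSTERESIS:
        return True           # too cold: open the valve
    if measured_temp > SETPOINT + HYSTERESIS:
        return False          # too warm: close the valve
    return heater_on          # inside the deadband: keep the current state

def simulate(initial_temp, steps=50):
    """Crude room model: the heater adds heat, the room leaks heat."""
    temp, heater_on = initial_temp, False
    for _ in range(steps):
        heater_on = control_step(temp, heater_on)
        temp += 0.8 if heater_on else -0.3  # hypothetical thermal response
    return temp

final = simulate(15.0)  # settles into a band around the 20°C setpoint
```

Running the simulation shows the controlled variable oscillating in a narrow band around the setpoint, which is exactly the behavior the PLC's logic statements are meant to produce.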


In practice, process control systems can be characterized as one or more of the following forms:


Discrete – Found in many manufacturing, motion and packaging applications. Robotic assembly, such as that found in automotive production, can be characterized as discrete process control. Most discrete manufacturing involves the production of discrete pieces of product, such as metal stamping.


Batch – Some applications require that specific quantities of raw materials be combined in specific ways for particular durations to produce an intermediate or end result. One example is the production of adhesives and glues, which normally require the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds).


Continuous – Often, a physical system is represented through variables that are smooth and uninterrupted in time. The control of the water temperature in a heating jacket is one example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes, in manufacturing, are used to produce very large quantities of product per year (millions to billions of pounds).
Applications having elements of discrete, batch and continuous process control are often called hybrid applications.

  • File Management

File management is a necessary evil associated with computers. It's really not all that much different than rummaging through a heap of papers on your desk except you don't get paper cuts. In either situation, a bit of organization and using the tools available can make the task easier.

  • Device Management

Device Management is a set of technologies, protocols and standards used to allow the remote management of mobile devices, often involving updates of firmware over the air (FOTA). The network operator, handset OEM or in some cases even the end-user (usually via a web portal) can use Device Management, also known as Mobile Device Management, or MDM, to update the handset firmware/OS, install applications and fix bugs, all over the air. Thus, large numbers of devices can be managed with single commands and the end-user is freed from the requirement to take the phone to a shop or service center to refresh or update.
For companies, a Device Management system means better control and safety as well as increased efficiency, decreasing the possibility of device downtime. As the number of smart devices increases in many companies today, there is a demand for managing, controlling and updating these devices in an effective way. As mobile devices have become true computers over the years, they also force organizations to manage them properly. Without proper management and security policies, mobile devices pose a threat to security: they contain a lot of information, and they may easily fall into the wrong hands. Normally an employee would need to visit the IT/Telecom department in order to update the device. With a Device Management system, that is no longer necessary: updates can easily be done "over the air". The content on a lost or stolen device can also easily be removed by "wipe" operations, so that sensitive documents on a lost or stolen device do not end up in the hands of others.

  • Information Maintenance

OPERATING SYSTEM SERVICES

Program execution – system capability to load a program into memory and to run it.

I/O operations – since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O.

File-system manipulation – program capability to read, write, create, and delete files.

Communications – exchange of information between processes executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.

Error detection – ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs.


Additional functions exist not for helping the user, but rather for ensuring efficient system operations.
• Resource allocation – allocating resources to multiple users or multiple jobs running at the same time.
• Preemptable and nonpreemptable resources
• Deadlock prevention and detection models
• Accounting – keeping track of which users use how much and what kinds of computer resources, for account billing or for accumulating usage statistics.
• Protection – ensuring that all access to system resources is controlled.

SOFTWARE COMPONENTS

  • Operating System Process Management

Process management is an integral part of any modern-day operating system (OS). The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes, and enable synchronisation among processes. To meet these requirements, the OS must maintain a data structure for each process which describes the state and resource ownership of that process, and which enables the OS to exert control over each process.
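The per-process data structure mentioned above is usually called a process control block (PCB). This is a toy sketch of a PCB and the table the OS keeps them in; the fields are typical textbook fields, not the layout of any particular kernel.

```python
# Toy process control block (PCB) and process table. Field choices are
# textbook-style assumptions, not any real kernel's layout.
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int
    state: str = "new"                # new, ready, running, waiting, terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    owned_resources: set = field(default_factory=set)

class ProcessTable:
    """The OS-side table that lets the kernel exert control over each process."""
    def __init__(self):
        self._table = {}
        self._next_pid = 1

    def create(self):
        pcb = ProcessControlBlock(pid=self._next_pid)
        self._table[pcb.pid] = pcb
        self._next_pid += 1
        return pcb

    def dispatch(self, pid):
        """Mark a process as the one currently on the CPU."""
        self._table[pid].state = "running"

table = ProcessTable()
p1 = table.create()
table.dispatch(p1.pid)
```

Everything the paragraph lists - state, resource ownership, and the hook for OS control - shows up as a field or method here, which is the whole point of keeping one PCB per process.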


  • Main Memory Management

Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system.
Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.
Garbage collection is the automated allocation and deallocation of computer memory resources for a program. It is generally implemented at the programming-language level, in contrast to manual memory management, the explicit allocation and deallocation of computer memory resources by the programmer.
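The automated deallocation described above can be observed directly in a garbage-collected language. This sketch builds a reference cycle - something plain reference counting cannot free - and uses a weak reference to watch Python's cycle collector reclaim it.

```python
# Watching automatic deallocation happen: a reference cycle is created,
# the external references are dropped, and the garbage collector frees it.
import gc
import weakref

class Block:
    """Stand-in for an allocated chunk of memory."""
    pass

a = Block()
b = Block()
a.partner = b        # create a cycle: a -> b -> a
b.partner = a
probe = weakref.ref(a)   # weak reference: lets us observe reclamation

del a, b             # drop the only external references
gc.collect()         # the cycle collector finds and frees the pair

# probe() now returns None: the memory was reclaimed automatically,
# with no explicit deallocation call by the programmer.
```

Under manual memory management the programmer would have had to break the cycle and free both blocks explicitly; forgetting to do so is exactly the kind of leak garbage collection exists to prevent.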

  • File Management

A file manager or file browser is a computer program that provides a user interface to work with file systems. The most common operations used are create, open, edit, view, print, play, rename, move, copy, delete, attributes, properties, search/find, and permissions. Files are typically displayed in a hierarchy. Some file managers contain features inspired by web browsers, including forward and back navigational buttons.
Some file managers provide network connectivity such as FTP, NFS, SMB or WebDAV. This is achieved either by allowing the user to browse for a server, connect to it and access the server's file system like a local file system, or by providing its own full client implementations for file server protocols.
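The common operations listed above (create, rename, copy, delete, attributes) map directly onto standard-library calls. This sketch exercises a few of them in a throwaway temporary directory; the file names are arbitrary.

```python
# The basic file-manager operations as standard-library calls,
# performed in a temporary directory so nothing real is touched.
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()

# create
path = os.path.join(workdir, "notes.txt")
with open(path, "w") as f:
    f.write("hello")

# rename
renamed = os.path.join(workdir, "notes-old.txt")
os.rename(path, renamed)

# copy
copied = shutil.copy(renamed, os.path.join(workdir, "backup.txt"))

# attributes / properties
size = os.path.getsize(copied)   # 5 bytes: the word "hello"

# delete a single file, then inspect what's left
os.remove(renamed)
listing = sorted(os.listdir(workdir))

# clean up the whole tree
shutil.rmtree(workdir)
```

A graphical file manager is essentially a user interface wrapped around exactly these calls, plus the hierarchy display and navigation features the paragraph mentions.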


  • I/O System Management

Input/output device information management system for multi-computer system
United States Patent 6526441

In a multi-computer system having a plurality of computers, an input/output device configuration definition table and an input/output device configuration reference table are adapted to be collectively managed. A configuration management program manages the configuration definition of all input/output devices of a plurality of computers by using the input/output device configuration definition table, and generates a changed data file when an input/output device configuration is changed. Dynamic system alteration is effected by changing the contents of the input/output device configuration reference table stored in a shared memory, in accordance with the changed data file. The input/output device configuration definition table and the input/output device configuration reference table each have an input/output device information part and an input/output device connection information part arranged in a matrix form to allow addition/deletion of an input/output device and a computer.

  • Secondary Storage Management

Secondary storage management is a classical feature of database management systems. It is usually supported through a set of mechanisms. These include index management, data clustering, data buffering, access path selection and query optimization.
None of these is visible to the user: they are simply performance features. However, they are so critical in terms of performance that their absence will keep the system from performing some tasks (simply because they take too much time). The important point is that they be invisible. The application programmer should not have to write code to maintain indices, to allocate disk storage, or to move data between disk and main memory. Thus, there should be a clear independence between the logical and the physical level of the system.
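The "invisible index management" idea above can be shown with a toy example: the application asks for a record by key, and the system's index turns what would be a full scan into a direct lookup. This is purely illustrative, not how any real DBMS lays out its indices.

```python
# Toy illustration of index management as an invisible performance
# feature: the caller just asks for a record; the index is internal.

records = [
    {"id": 3, "name": "disk"},
    {"id": 1, "name": "page"},
    {"id": 2, "name": "buffer"},
]

# The system maintains this index transparently; the application
# programmer never writes code to keep it up to date.
index = {rec["id"]: pos for pos, rec in enumerate(records)}

def lookup(record_id):
    """Access path selection in miniature: O(1) index probe
    instead of an O(n) scan over all records."""
    pos = index.get(record_id)
    return records[pos] if pos is not None else None

row = lookup(2)          # found via the index, no scan needed
missing = lookup(99)     # absent keys are answered without scanning either
```

The point of the paragraph holds here in miniature: `lookup` behaves identically with or without the index; only its speed changes, which is why such mechanisms can stay below the logical level of the system.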

  • Protection System

With the new series of ThinkPads, IBM introduced the Active Protection System (APS) in 2003. The APS is a protection system for the ThinkPad's internal hard drive. A sensor inside the ThinkPad recognizes when the notebook is accelerated, and a software applet is then triggered to park the hard disk. This significantly reduces the risk of data loss when the notebook is dropped, since the read/write head of the hard drive is parked and hence can't crash onto the platter when the notebook hits the floor.
The hardware sensor is capable not only of recognizing acceleration of the notebook, but also (to a certain degree) of sensing its orientation in space relative to gravity's axis. Furthermore, because the actual control is implemented in software, its functionality is extendable, which makes it possible to implement features like the "ignore minor shocks" feature present in the Windows-based control applet. (This feature prevents the hard drive from parking in the case of minor regular shocks, such as occur on a train or in a car.)
The measurements are physically performed by an Analog Devices ADXL320 accelerometer chip, managed by the embedded controller.
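The "ignore minor shocks" behavior described above can be sketched as a simple filter: park the heads only on a sustained near-free-fall reading, not on brief vibration dips. The thresholds and sample window here are invented; the real APS logic in the embedded controller and applet is not public in this detail.

```python
# Hedged sketch of "ignore minor shocks": park only on sustained
# near-free-fall, not brief vibration. All thresholds are invented.

FREE_FALL_G = 0.3   # magnitude well below 1 g suggests the laptop is falling
SHOCK_WINDOW = 3    # consecutive low samples required before parking

def should_park(samples):
    """samples: recent accelerometer magnitudes in g, oldest first."""
    streak = 0
    for g in samples:
        if g < FREE_FALL_G:
            streak += 1
            if streak >= SHOCK_WINDOW:
                return True    # sustained near-free-fall: park the heads
        else:
            streak = 0         # a brief dip (minor shock) is ignored
    return False

# Train/car vibration: isolated dips, so the drive keeps spinning.
vibration = should_park([1.0, 0.2, 1.0, 0.2, 1.0])   # False
# A real drop: several consecutive near-zero-g samples, so park.
drop = should_park([1.0, 0.1, 0.1, 0.1])             # True
```

The design choice is the same one the applet makes: trading a slightly slower reaction (waiting for a few samples) for far fewer false parks during normal bumpy use.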

  • Command-Interpreter System
Command interpreter system in an I/O controller
United States Patent 5931920

A hardware accelerated I/O data processing engine to execute a minimum number of types of I/O data processing commands in response to a stimulus from a host computer. The data processing engine, referred to as a command interpreter includes a command queue, a logic unit, a multiple purpose interface, at least one memory, and a controlling state machine, that each operate in concert with each other and without software control. The types of commands executed by the command interpreter can include, but are not limited to, an Initialize, Copy, DMA Read, DMA Write, Cumulative Exclusive OR, Verify, Compare, and ECC Check. The execution of commands that specify a source data location and a destination data location are characterized by a plurality of reads to an internal cache from the source data location for each bulk write from the internal cache to the destination data location. The locations of the data operated on by the command interpreter include a local I/O controller memory and a non-local I/O controller memory accessible to the command interpreter by way of an I/O bus.