Platform Technologies Module 2
Module 2 of 4
PLATFORM TECHNOLOGIES
Brueckner B. Aswigue
This module has three lessons: operating system structures, process concepts,
and threads. The operating system structures lesson discusses the general structure of
computer systems. It may be a good idea to review the basic concepts of machine
organization and assembly language programming. Operating system structures
elaborates on the following components: concepts of memory, CPU, registers, I/O,
interrupts, instructions, and the instruction execution cycle. Since the operating
system is the interface between the hardware and user programs, a good understanding
of operating systems requires an understanding of both hardware and programs.
The overview of the process concepts will be discussed in lesson two. This lesson
will introduce the concepts of a process and concurrent execution, which are at the very
heart of modern operating systems. It will show process execution in a modern time-
sharing system and introduce the notion of a thread (lightweight process) and
interprocess communication (IPC).
The last lesson of this module is threads. This lesson introduces many
concepts associated with multithreaded computer systems and covers how to use Java
to create and manipulate threads. We have found it especially useful to discuss how a
Java thread maps to the thread model of the host operating system.
If you want to know more interesting facts about this module, visit the following:
Operating System, http://www.wiley.com/college (click “Who’s my rep?”), and
Operating System, http://www.os-book.com. The ebook and PowerPoint presentation
will be given to you as additional references to elaborate on some of the topics.
The number of hours allotted for this module shall be 8 hours. You are expected
to finish the module in three weeks.
LEARNING OUTCOMES
PRE-TEST
The following questions cover general areas of this module. You may not know
the answers to all questions, but please attempt to answer them without asking
others or referring to books.
Choose the best answer for each question and write the letter of your choice
after the number.
1. The user interface can take several forms, such as:
a. Graphical user interface and external execution
b. Graphical user interface and command-line interface
c. Graphical user interface and file system
d. All of the above
2. A running program may require Input/Output (I/O).
a. True
b. False
5. Under error detection, this needs to detect and correct errors constantly.
a. Resource allocation
b. User interface
c. Communication
d. Operating System
6. This is used as record keeping for billing purposes and for accumulating usage statistics.
a. Math
b. Statistic
c. Accounting
d. Information Technology
8. An interface in which the user employs a mouse-based window and menu system.
a. Graphical user interface
b. Mouse user interface
c. Line interface
d. On-line interface
Objectives:
At the end of the lesson, you should be able to:
1. describe thoroughly the services an operating system provides to users,
processes, and other systems;
2. discuss comprehensively the various ways of structuring an operating system;
and
3. explain accurately how operating systems are installed and customized and
how they boot.
Let’s Engage.
An operating system provides the environment within which programs are
executed. Internally, operating systems vary greatly in their makeup, since they are
organized along many different lines. The design of a new operating system is a major
task. It is important that the goals of the system be well defined before the design begins.
These goals form the basis for choices among various algorithms and strategies.
We can view an operating system from several vantage points. One view focuses
on the services that the system provides; another, on the interface that it makes
available to users and programmers; a third, on its components and their
interconnections.
The more complex the operating system is, the more it is expected to do on behalf
of its users. Although its main concern is the execution of user programs, it also needs
to take care of various system tasks that are better left outside the kernel itself. A system
therefore consists of a collection of processes: operating system processes executing
system code and user processes executing user code. Potentially, all these processes
can execute concurrently, with the CPU (or CPUs) multiplexed among them. By
switching the CPU between processes, the operating system can make the computer
more productive. In lesson 2, you will read about what processes are and how they work.
Operating systems are those programs that interface the machine with
application programs, such as Microsoft Office, Google applications, and
programming-language software. The main function of these systems is to dynamically
allocate the shared system resources to the executing programs. As such, research in
this area is clearly concerned with the management and scheduling of memory,
processes, and other devices. But the interface with adjacent levels continues to shift
with time. Functions that were originally part of the operating system have migrated to
the hardware. On the other side, programmed functions extraneous to the problems
being solved by the application programs are included in the operating system.
To ease this chore, a set of system programs is provided. Some of these programs
are referred to as utilities, or library programs. These implement frequently used
functions that assist in program creation, the management of files, and the control of
I/O devices.
OPERATING SYSTEMS STRUCTURE
Operating-System Services
Figure 1 shows the view of the various operating-system services and how they
interrelate.
One set of operating system services provides functions that are helpful to the user.
• User interface. Almost all operating systems have a user interface (UI). This
interface can take several forms. One is a command-line interface (CLI), which uses
text commands and a method for entering them (say, a keyboard for typing in
commands in a specific format with specific options). Another is a batch interface,
in which commands and directives to control those commands are entered into files,
and those files are executed. Most commonly, a graphical user interface (GUI) is
used. Here, the interface is a window system with a pointing device to direct I/O,
choose from menus, and make selections and a keyboard to enter text. Some systems
provide two or all three of these variations.
• Program execution. The system must be able to load a program into memory and
to run that program. The program must be able to end its execution, either normally
or abnormally (indicating error).
• I/O operations. A running program may require I/O, which may involve a file or an
I/O device. For specific devices, special functions may be desired (such as recording
to a CD or DVD drive or blanking a display screen). For efficiency and protection,
users usually cannot control I/O devices directly. Therefore, the operating system
must provide a means to do I/O.
• File-system manipulation. The file system is of particular interest. Obviously,
programs need to read and write files and directories. They also need to create and
delete them by name, search for a given file, and list file information. Finally, some
operating systems include permissions management to allow or deny access to files
or directories based on file ownership. Many operating systems provide a variety of
file systems, sometimes to allow personal choice and sometimes to provide specific
features or performance characteristics.
• Error detection. The operating system needs to detect and correct errors
constantly. Errors may occur in the CPU and memory hardware (such as a memory
error or a power failure), in I/O devices (such as a parity error on disk, a connection
failure on a network, or lack of paper in the printer), and in the user program (such
as an arithmetic overflow, an attempt to access an illegal memory location, or a too-
great use of CPU time). For each type of error, the operating system should take the
appropriate action to ensure correct and consistent computing. Sometimes, it has
no choice but to halt the system. At other times, it might terminate an error-causing
process or return an error code to a process for the process to detect and possibly
correct.
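The last case above — returning an error code for the process itself to detect and possibly correct — can be sketched with a short example. Python's os module stands in here for the C system-call interface; the path is illustrative and is chosen so that it does not exist.

```python
import errno
import os

# Opening a file that does not exist makes the open() system call fail.
# The operating system returns an error code (errno) that the process
# can detect and act on instead of the system halting.
try:
    os.open("/no/such/file/expected-here", os.O_RDONLY)
    failed = False
except OSError as e:
    failed = True
    code = e.errno        # the error code returned by the operating system
```

On POSIX systems the code returned for a missing file is `ENOENT`; the process is free to report it, retry, or abort.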
Another set of operating system functions exists not for helping the user but
rather for ensuring the efficient operation of the system itself. Systems with multiple
users can gain efficiency by sharing the computer resources among the users.
• Resource allocation. When there are multiple users or multiple jobs running at the
same time, resources must be allocated to each of them. The operating system
manages many different types of resources. Some (such as CPU cycles, main
memory, and file storage) may have special allocation code, whereas others (such as
I/O devices) may have much more general request and release code. For instance,
in determining how best to use the CPU, operating systems have CPU-scheduling
routines that take into account the speed of the CPU, the jobs that must be executed,
the number of registers available, and other factors. There may also be routines to
allocate printers, USB storage drives, and other peripheral devices.
• Accounting. We want to keep track of which users use how much and what kinds
of computer resources. This record keeping may be used for accounting (so that
users can be billed) or simply for accumulating usage statistics. Usage statistics may
be a valuable tool for researchers who wish to reconfigure the system to improve
computing services.
Figure 1. A View of Operating System Services.
C. Touchscreen Interfaces
• Touchscreen devices require new interfaces
o Mouse not possible or not desired
o Actions and selection based on gestures
o Virtual keyboard for text entry
• Voice commands.
D. Choice of Interface
• The choice of whether to use a command-line or GUI interface is mostly one
of personal preference.
• System administrators who manage computers and power users who have
deep knowledge of a system frequently use the command-line interface. For
them, it is more efficient, giving them faster access to the activities they need
to perform. Indeed, on some systems, only a subset of system functions is
available via the GUI, leaving the less common tasks to those who are
command-line knowledgeable. Further, command-line interfaces usually make
repetitive tasks easier, in part because they have their own programmability.
For example, if a frequent task requires a set of command-line steps, those
steps can be recorded into a file, and that file can be run just like a program.
The program is not compiled into executable code but rather is interpreted by
the command-line interface. These shell scripts are very common on systems
that are command-line oriented, such as UNIX and Linux.
• In contrast, most Windows users are happy to use the Windows GUI
environment and almost never use the MS-DOS shell interface. The various
changes undergone by the Macintosh operating systems provide a nice study
in contrast. Historically, Mac OS has not provided a command-line interface,
always requiring its users to interface with the operating system using its GUI.
Figure 4. The Mac OS X GUI (Operating System. http://www.os-book.com).
System Calls
Let’s first use an example to illustrate how system calls are used: writing a
simple program to read data from one file and copy them to another file.
1. The first input that the program will need is the names of the two files: the input
file and the output file. These names can be specified in many ways, depending
on the operating-system design.
a. One approach is for the program to ask the user for the names. In an
interactive system, this approach will require a sequence of system calls,
first to write a prompting message on the screen and then to read from the
keyboard the characters that define the two files.
b. On mouse-based and icon-based systems, a menu of file names is usually
displayed in a window. The user can then use the mouse to select the
source name, and a window can be opened for the destination name to be
specified. This sequence requires many I/O system calls.
2. Once the two file names have been obtained, the program must open the input
file and create the output file. Each of these operations requires another system
call. Possible error conditions for each operation can require additional system
calls.
3. When the program tries to open the input file, for example, it may find that
there is no file of that name or that the file is protected against access.
4. If the input file exists, then we must create a new output file. We may find that
there is already an output file with the same name. This situation may cause the
program to abort (a system call), or we may delete the existing file (another system
call) and create a new one (yet another system call).
a. Another option, in an interactive system, is to ask the user (via a sequence
of system calls to output the prompting message and to read the response
from the terminal) whether to replace the existing file or to abort the
program.
5. When both files are set up, we enter a loop that reads from the input file (a system
call) and writes to the output file (another system call). Each read and write must
return status information regarding various possible error conditions. On input,
the program may find that the end of the file has been reached or that there was
a hardware failure in the read (such as a parity error).
6. The write operation may encounter various errors, depending on the output
device (for example, no more disk space).
7. Finally, after the entire file is copied, the program may close both files (another
system call), write a message to the console or window (more system calls), and
finally terminate normally (the final system call). Figure 5 shows this system-call
sequence for copying the contents of one file to another.
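The numbered sequence above can be sketched as a short program. This is a minimal illustration using Python's os module, which exposes the POSIX open(), read(), write(), and close() system calls by name; the file names and buffer size are arbitrary choices for the example.

```python
import os
import tempfile

def copy_file(src_path, dst_path, bufsize=4096):
    # Open the input file and create the output file (one system call each).
    src = os.open(src_path, os.O_RDONLY)   # may fail: no file of that name
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    copied = 0
    try:
        # Loop: read from the input file and write to the output file,
        # one system call per read and per write.
        while True:
            chunk = os.read(src, bufsize)
            if not chunk:                  # end of file reached
                break
            copied += os.write(dst, chunk)
    finally:
        # Close both files (more system calls).
        os.close(src)
        os.close(dst)
    return copied

# Usage: copy a small temporary file.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "in.txt")
dst = os.path.join(tmpdir, "out.txt")
with open(src, "w") as f:
    f.write("hello, system calls")
n = copy_file(src, dst)
```

Error checking here is deliberately minimal; a real utility would also handle the error conditions the steps above describe (missing input file, existing output file, device full).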
The caller need know nothing about how the system call is implemented
or what it does during execution. Rather, the caller need only obey the Application
Programming Interface (API) and understand what the operating system will do
as a result of the execution of that system call. Thus, most of the details of the
operating-system interface are hidden from the programmer by the Application
Programming Interface (API) and are managed by the run-time support library.
The relationship between an Application Programming Interface (API), the
system-call interface, and the operating system is shown in Figure 6, which
illustrates how the operating system handles a user application invoking the
open() system call.
Figure 6. The handling of a user application invoking the open() system call
(Operating System. http://www.os-book.com).
Figure 7. Passing of parameters as a table (Operating System. http://www.os-
book.com).
• Often, more information is required than simply the identity of the desired system call
o Exact type and amount of information vary according to OS and call
1. Process control
• create process, terminate process
• end, abort
• load, execute
• get process attributes, set process attributes
• wait for time
• wait event, signal event
• allocate and free memory
• Dump memory if error
• Debugger for determining bugs, single step execution
• Locks for managing access to shared data between processes
Figure 8 below shows how the standard C library provides a portion of the
system-call interface for many versions of UNIX and Linux. As an example, let’s
assume a C program invokes the printf() statement. The C library intercepts this
call and invokes the necessary system call (or calls) in the operating system—in
this instance, the write() system call. The C library takes the value returned by
write() and passes it back to the user program.
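The same layering can be demonstrated directly. In this sketch, Python's print() stands in for the C library's printf(), while os.write() issues the underlying write() system call on file descriptor 1 (standard output) without going through the library's buffering.

```python
import os

# High-level library call: like printf(), print() formats and buffers the
# text, and the library eventually issues the write() system call for us.
print("via the library")

# Direct system call: bypass the library and invoke write() ourselves on
# file descriptor 1 (standard output). The return value is the number of
# bytes written, which the library would normally pass back to the program.
n = os.write(1, b"via write() directly\n")
```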
Figure 8. System Call Interface using C program invoking printf() library call,
which calls write() system call (Operating System. http://www.os-book.com).
There are so many facets of and variations in process and job control that we
next use two examples—one involving a single-tasking system and the other a
multitasking system—to clarify these concepts. Figure 9 shows the MS-DOS operating
system, an example of a single-tasking system. It has a command interpreter that is
invoked when the computer is started.
• Single-tasking
• Shell invoked when system
booted
• Simple method to run program
• No process created
• Single memory space
• Loads program into memory,
overwriting all but the kernel
• Program exit -> shell reloaded
Figure 9. MS-DOS execution. (a) At system startup. (b) Running a program (Operating
System. http://www.os-book.com).
• Unix variant
• Multitasking
• User login -> invoke user’s choice of
shell
• Shell executes fork() system call to
create process
o Executes exec() to load program
into process
o Shell waits for process to
terminate or continues with
user commands
• Process exits with:
o code = 0 – no error
o code > 0 – error code
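The fork()/exec()/wait() sequence in the bullets above can be sketched as follows. Python's os module wraps the same system calls; the child program here is a trivial one-liner standing in for a user command.

```python
import os
import sys

pid = os.fork()                          # fork() system call: create a process
if pid == 0:
    # Child: exec() replaces this process image with a new program.
    os.execv(sys.executable, [sys.executable, "-c", "print('child ran')"])
else:
    # Parent (the "shell"): wait for the child process to terminate.
    _, status = os.wait()
    exit_code = os.WEXITSTATUS(status)   # code 0 means no error
```

A real shell would then loop back and read the next user command instead of exiting.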
2. File management
• create file, delete file
• open, close file
• read, write, reposition
• get and set file attributes
We first need to be able to create() and delete() files. Either system call
requires the name of the file and perhaps some of the file’s attributes. Once the
file is created, we need to open() it and use it. We may also read(), write(), or
reposition() (rewind or skip to the end of the file, for example).
Finally, we need to close() the file, indicating that we are no longer using
it. We may need these same sets of operations for directories if we have a directory
structure for organizing files in the file system. In addition, for either files or
directories, we need to be able to determine the values of various attributes and
perhaps to reset them if necessary.
File attributes include the file name, file type, protection codes, accounting
information, and so on. At least two system calls, get file attributes() and set file
attributes(), are required for this function. Some operating systems provide many
more calls, such as calls for file move() and copy(). Others might provide an
Application Programming Interface (API) that performs those operations using
code and other system calls, and others might provide system programs to
perform those tasks. If the system programs are callable by other programs, then
each can be considered an Application Programming Interface (API) by other
system programs.
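The get/set file-attribute calls named above can be sketched with the POSIX stat() and chmod() system calls, again reached through Python's os module; the file name is illustrative.

```python
import os
import stat
import tempfile

# Create a small file to inspect (the name is illustrative).
path = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(path, "w") as f:
    f.write("data")

# "get file attributes": stat() reports the size, protection bits, and more.
size_before = os.stat(path).st_size

# "set file attributes": chmod() resets the protection code
# (here, read-only for the owner).
os.chmod(path, stat.S_IRUSR)
mode = stat.S_IMODE(os.stat(path).st_mode)
```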
3. Device management
• request device, release device
• read, write, reposition
• get device attributes, set device attributes
• logically attach or detach devices
The various resources controlled by the operating system can be thought
of as devices. Some of these devices are physical devices (for example, disk
drives), while others can be thought of as abstract or virtual devices (for example,
files). A system with multiple users may require us to first request() a device, to
ensure exclusive use of it. After we are finished with the device, we release() it.
These functions are similar to the open() and close() system calls. The hazard then is
the potential for device contention and perhaps deadlock.
Once the device has been requested (and allocated to us), we can read(),
write(), and (possibly) reposition() the device, just as we can with files. In fact, the
similarity between I/O devices and files is so great that many operating systems,
including UNIX, merge the two into a combined file–device structure. In this case,
a set of system calls is used on both files and devices. Sometimes, I/O devices
are identified by special file names, directory placement, or file attributes.
The user interface can also make files and devices appear to be similar,
even though the underlying system calls are dissimilar. This is another example
of the many design decisions that go into building an operating system and user
interface.
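The file–device unification described above can be seen directly on a UNIX-like system: the device file /dev/null accepts the very same open(), write(), and close() calls used for ordinary files. A small sketch, assuming a POSIX system where /dev/null exists:

```python
import os

fd = os.open("/dev/null", os.O_WRONLY)   # open a device as if it were a file
n = os.write(fd, b"discarded bytes")     # the same write() call used on files
os.close(fd)                             # the same close() call, too
```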
4. Information maintenance
• get time or date, set time or date
• get system data, set system data
• get and set process, file, or device attributes
Many system calls exist simply for the purpose of transferring information
between the user program and the operating system. For example, most systems
have a system call to return the current time() and date(). Other system calls may
return information about the system, such as the number of current users, the
version number of the operating system, the amount of free memory or disk
space, and so on.
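A few of these information-maintenance queries can be sketched as follows, using Python wrappers over the corresponding system interfaces on a POSIX system:

```python
import os
import time

now = time.time()             # current time and date (seconds since the epoch)
cpus = os.cpu_count()         # system data: number of processors
sysname = os.uname().sysname  # system data: operating-system name
```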
5. Communications
• create, delete communication connection
• send, receive messages (message-passing model) to a host name or process name
• From client to server
• Shared-memory model: create and gain access to memory regions
• transfer status information
• attach and detach remote devices
There are two common models of interprocess communication: the
message passing model and the shared-memory model. In the message-passing
model, the communicating processes exchange messages with one another to
transfer information. Messages can be exchanged between the processes either
directly or indirectly through a common mailbox. Before communication can take
place, a connection must be opened. The name of the other communicator must
be known, be it another process on the same system or a process on another
computer connected by a communications network. Each computer in a network
has a host name by which it is commonly known. A host also has a network
identifier, such as an IP address. Similarly, each process has a process name,
and this name is translated into an identifier by which the operating system can
refer to the process. The get hostid() and get processid() system calls do this
translation. The identifiers are then passed to the general purpose open() and
close() calls provided by the file system or to specific open connection() and close
connection() system calls, depending on the system’s model of communication.
The recipient process usually must give its permission for communication to take
place with an accept connection() call. Most processes that will be receiving
connections are special-purpose daemons, which are system programs provided
for that purpose. They execute a wait for connection() call and are awakened when
a connection is made.
The source of the communication, known as the client, and the receiving
daemon, known as a server, then exchange messages by using read message()
and write message() system calls. The close connection() call terminates the
communication.
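The text above describes network connections between clients and server daemons. As a self-contained stand-in, this sketch shows the message-passing model with a pipe between a parent and a forked child: the pipe is the opened connection, and the read/write calls play the role of read message() and write message().

```python
import os

r, w = os.pipe()        # open a communication channel ("open connection()")
pid = os.fork()
if pid == 0:
    # Child acts as the client: send a message, then close its end.
    os.close(r)
    os.write(w, b"hello from client")   # write message()
    os.close(w)
    os._exit(0)
else:
    # Parent acts as the server: receive the message, then close.
    os.close(w)
    msg = os.read(r, 1024)              # read message()
    os.close(r)                         # close connection()
    os.wait()
```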
6. Protection
• Control access to resources
• Get and set permissions
• Allow and deny user access
System Programs
o File manipulation
o Status information (sometimes stored in a file)
o File modification
o Programming language support
o Program loading and execution
o Communications
o Background services
o Application programs
Most users’ view of the operating system is defined by system programs,
not the actual system calls
o The separation of policy from mechanism is a very important principle; it allows
maximum flexibility if policy decisions are to be changed later (example – timer)
o Specifying and designing an OS is a highly creative task of software engineering
Implementation
o Much variation
o Early OSes in assembly language
o Then system programming languages like Algol, PL/1
o Now C, C++
o Actually usually a mix of languages
o Lowest levels in assembly
o Main body in C
o Systems programs in C, C++, scripting languages like PERL, Python, shell
scripts
o More high-level language easier to port to other hardware
o But slower
o Emulation can allow an OS to run on non-native hardware
o Figure 11 shows that Microsoft Disk Operating System (MS-DOS) was originally
designed and implemented by a few people who had no idea that it would
become so popular. It was written to provide the most functionality in the least
space, so it was not carefully divided into modules.
o MS-DOS – written to provide the most functionality in the least space
o Not divided into modules
o Although MS-DOS has some structure, its interfaces and levels of
functionality are not well separated
Non Simple Structure -- UNIX
Layered Approach
o The operating system is divided into a number of layers (levels), each built on top of
lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the
user interface.
o With modularity, layers are selected such that each uses functions (operations) and
services of only lower-level layers
o The major difficulty with the layered approach involves appropriately defining the
various layers. Because a layer can use only lower-level layers, careful planning is
necessary. For example, the device driver for the backing store (disk space used by
virtual-memory algorithms) must be at a lower level than the memory-management
routines, because memory management requires the ability to use the backing store.
Microkernel System Structure
Figure: in a microkernel design, application programs and system services exchange messages through the microkernel, which runs directly above the hardware.
o Benefits: more secure
o Detriments:
o Performance overhead of user space to kernel space communication
Modules
o Many modern operating systems implement loadable kernel modules
o Uses object-oriented approach
o Each core component is separate
o Each talks to the others over known interfaces
o Each is loadable as needed within the kernel
o Overall, similar to layers but more flexible
o Linux, Solaris, etc.
Hybrid Systems
o Most modern operating systems are actually not one pure model
o Hybrid combines multiple approaches to address performance, security,
usability needs
o Linux and Solaris kernels in kernel address space, so monolithic, plus
modular for dynamic loading of functionality
o Windows mostly monolithic, plus microkernel for different subsystem
personalities
o Apple Mac OS X hybrid, layered, Aqua UI plus Cocoa programming environment
o Below is kernel consisting of Mach microkernel and BSD Unix parts, plus
I/O kit and dynamically loadable modules (called kernel extensions)
o The Mach component provides memory management; support for remote
procedure calls (RPCs) and interprocess communication (IPC) facilities,
including message passing; and thread scheduling. The BSD component
provides a BSD command-line interface, support for networking and file
systems, and an implementation of POSIX Application Programming
Interface (API)s, including Pthreads.
o In addition to Mach and BSD, the kernel environment provides an I/O kit
for development of device drivers and dynamically loadable modules
(which Mac OS X refers to as kernel extensions). The BSD application
environment can make use of BSD facilities directly.
Figure: the Mac OS X kernel environment, consisting primarily of the Mach microkernel and BSD components.
B. iOS
o Apple mobile OS for iPhone, iPad
o Structured on Mac OS X, added functionality
o Does not run OS X applications natively
o Also runs on different CPU architecture (ARM vs. Intel)
o Cocoa Touch Objective-C Application Programming Interface (API) for
developing apps
o Media services layer for graphics, audio, video
o Core services provides cloud computing, databases
o Core operating system, based on Mac OS X kernel
C. Android
Figure: Android architecture — an application framework above native libraries (surface manager, media framework, webkit, libc) and the Dalvik virtual machine.
Performance Tuning
o Improve performance by removing bottlenecks
o OS must provide means of computing and displaying measures of system
behavior
o For example, “top” program or Windows Task Manager
Figure 19. Windows Task Manager (Operating System. http://www.os-
book.com).
DTrace
• DTrace tool in Solaris, FreeBSD, Mac OS X allows live instrumentation on
production systems
o Probes fire when code is executed within a provider, capturing state data
and sending it to consumers of those probes
• Example: following the XEventsQueued system call as it moves from the libc
library into the kernel and back
Figure 20. Solaris 10 dtrace follows a system call within the kernel (Operating
System. http://www.os-book.com).
• DTrace code to record the amount of time each process with UserID 101 spends
in running mode (on CPU), in nanoseconds
Operating System Generation
1. Operating systems are designed to run on any of a class of machines; the system
must be configured for each specific computer site
2. SYSGEN program obtains information concerning the specific configuration of
the hardware system
o Used to build a system-specific compiled kernel or a system-tuned kernel
o Can generate more efficient code than one general kernel
System Boot
• The procedure of starting a computer by loading the kernel is known as booting
the system.
• When power initialized on system, execution starts at a fixed memory location
o Firmware ROM used to hold initial boot code
o All forms of ROM are also known as firmware
• Operating system must be made available to hardware so hardware can start it
o Small piece of code – the bootstrap loader, stored in read-only memory (ROM)
or electrically erasable programmable read-only memory (EEPROM) – locates
the kernel, loads it into memory, and starts its execution
o Sometimes two-step process where boot block at fixed location loaded by
ROM code, which loads bootstrap loader from disk
o Common bootstrap loader, GRUB (example of an open-source bootstrap
program for Linux systems), allows selection of kernel from multiple
disks, versions, kernel options
• Kernel loads and system is then running
SUMMARY
Operating systems provide a number of services. At the lowest level, system calls
allow a running program to make requests from the operating system directly. At a
higher level, the command interpreter or shell provides a mechanism for a user to issue
a request without writing a program. Commands may come from files during batch-
mode execution or directly from a terminal or desktop GUI when in an interactive or
time-shared mode. System programs are provided to satisfy many common user
requests.
The types of requests vary according to level. The system-call level must provide
the basic functions, such as process control and file and device manipulation. Higher-
level requests, satisfied by the command interpreter or system programs, are translated
into a sequence of system calls. System services can be classified into several categories:
program control, status requests, and I/O requests. Program errors can be considered
implicit requests for service.
The design of a new operating system is a major task. It is important that the
goals of the system be well defined before the design begins. The type of system desired
is the foundation for choices among various algorithms and strategies that will be
needed.
Modules allow the kernel to be extended while it is executing. Generally, operating
systems adopt a hybrid approach that combines several different types of structures.
Debugging process and kernel failures can be accomplished through the use of
debuggers and other tools that analyze core dumps. Tools such as DTrace analyze
production systems to find bottlenecks and understand other system behavior.
The bootstrap can execute the operating system directly if the operating system
is also in the firmware, or it can complete a sequence in which it loads progressively
smarter programs from firmware and disk until the operating system itself is loaded into
memory and executed.
It may be a good idea to review the basic concepts of machine organization and
assembly language programming. You should be comfortable with the concepts of
memory, CPU, registers, I/O, interrupts, instructions, and the instruction execution
cycle. Since the operating system is the interface between the hardware and user
programs, a good understanding of operating systems requires an understanding of
both hardware and programs.
Program Activity.
1. Based on the figure below, write this program in Java. Be sure to
include all necessary error checking, including ensuring that the source file
exists. Once you have correctly designed and tested the program, if you
used a system that supports it, run the program using a utility that traces
system calls. Submit your program, either handwritten or printed, on a
clean sheet of short coupon bond paper.
LESSON 2 - PROCESS
Objectives
At the end of the lesson, you should be able to:
1. introduce the notion of a process—a program in execution, which forms the
basis of all computation;
2. describe the various features of processes, including scheduling, creation, and
termination;
3. explore interprocess communication using shared memory and message
passing; and,
4. describe communication in client–server systems.
LET’S ENGAGE
Early computers allowed only one program to be executed at a time. This program
had complete control of the system and had access to all the system’s resources. In
contrast, contemporary computer systems allow multiple programs to be loaded into
memory and executed concurrently. This evolution required firmer control and more
compartmentalization of the various programs; and these needs resulted in the notion
of a process, which is a program in execution. A process is the unit of work in a modern
time-sharing system.
1. Process Concept
A question that arises in discussing operating systems involves what to call all
the Central Processing Unit (CPU) activities. A batch system executes jobs, whereas a
time-shared system has user programs or tasks. Even on a single-user system, a user
may be able to run several programs at one time: a word processor, a Web browser, and
an e-mail package. And even if a user can execute only one program at a time, such as
on an embedded device that does not support multitasking, the operating system may
need to support its own internal programmed activities, such as memory management.
In many respects, all these activities are similar, so we call all of them processes.
The terms job and process are used almost interchangeably in this text. Although
we personally prefer the term process, much of operating-system theory and terminology
was developed during a time when the major activity of operating systems was job
processing. It would be misleading to avoid the use of commonly accepted terms that
include the word job (such as job scheduling) simply because process has superseded
job.
o terminated: The process has finished execution
▪ These state names vary across operating systems, but the states that they
represent are found on all systems. Certain operating systems also more
finely delineate process states. It is important to realize that only one
process can be running on any processor at any instant, while many
processes may be ready and waiting
Figure 24. Diagram showing CPU switch from process to process
(Operating System. http://www.os-book.com).
1.3 Threads
▪ So far, a process has had a single thread of execution
▪ Consider having multiple program counters per process
o Multiple locations can execute at once
▪ Multiple threads of control -> threads
▪ Must then have storage for thread details, and multiple program counters,
in the PCB
2. Process Scheduling
▪ Maximize CPU use, quickly switch processes onto CPU for time sharing
▪ Process scheduler selects among available processes for next execution on CPU
▪ Maintains scheduling queues of processes
o Job queue – set of all processes in the system
o Ready queue – set of all processes residing in main memory, ready and
waiting to execute
o Device queues – set of processes waiting for an I/O device
o Processes migrate among the various queues
Figure 25. Ready Queue and Various I/O Device Queues (Operating System.
http://www.os-book.com).
2.2 Schedulers
▪ Short-term scheduler (or CPU scheduler) – selects which process
should be executed next and allocates CPU
o Sometimes the only scheduler in a system
o Short-term scheduler is invoked frequently (milliseconds)
(must be fast)
▪ Long-term scheduler (or job scheduler) – selects which processes
should be brought into the ready queue
o Long-term scheduler is invoked infrequently (seconds,
minutes) (may be slow)
o The long-term scheduler controls the degree of
multiprogramming
▪ Processes can be described as either:
o I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
o CPU-bound process – spends more time doing computations;
few very long CPU bursts
▪ Long-term scheduler strives for good process mix
3. Operations on Processes
• The processes in most systems can execute concurrently, and they may be
created and deleted dynamically. Thus, these systems must provide a
mechanism for process creation and termination.
o Figure 28 shows a tree of processes, tracing parent processes recursively
all the way to the init process. On UNIX and Linux systems, we can
obtain a listing of processes by using the ps command. For example, the
command ps –el will list complete information for all processes currently
active in the system. It is easy to construct a process tree similar to the
one shown in Figure 28.
Figure 28. A tree of processes rooted at init (pid = 1), with descendants including
tcsch (pid = 4005), emacs (pid = 9204), and ps (pid = 9298) (Operating System.
http://www.os-book.com).
Address space
▪ Child duplicate of parent
▪ Child has a program loaded into it
UNIX examples
▪ fork() system call creates new process
▪ exec() system call used after a fork() to replace the process’
memory space with a new program
Figure 29. Process creation using the fork() system call (Operating
System. http://www.os-book.com).
Figure 30. Creating a separate process using the UNIX fork() system call.
• Process executes last statement and then asks the operating system to delete it
using the exit() system call.
o Returns status data from child to parent (via wait())
o Process’ resources are deallocated by operating system
• Parent may terminate the execution of children processes using the abort()
system call. Some reasons for doing so:
o Child has exceeded allocated resources
o Task assigned to child is no longer required
o The parent is exiting, and the operating system does not allow a child to
continue if its parent terminates
• Some operating systems do not allow a child to exist if its parent has terminated.
If a process terminates, then all its children must also be terminated.
o cascading termination. All children, grandchildren, etc. are terminated.
o The termination is initiated by the operating system.
• The parent process may wait for termination of a child process by using the
wait()system call. The call returns status information and the pid of the
terminated process
pid = wait(&status);
• If no parent is waiting (i.e., it has not yet invoked wait()), the process is a zombie
• If the parent terminated without invoking wait(), the process is an orphan
4. Interprocess Communication
• Processes within a system may be independent or cooperating
• A cooperating process can affect or be affected by other processes, including
by sharing data
• Reasons for cooperating processes:
o Information sharing. Since several users may be interested in the same
piece of information (for instance, a shared file), we must provide an
environment to allow concurrent access to such information.
o Computation speedup. If we want a particular task to run faster, we must
break it into subtasks, each of which will be executing in parallel with the
others. Notice that such a speedup can be achieved only if the computer
has multiple processing cores.
o Modularity. We may want to construct the system in a modular fashion,
dividing the system functions into separate processes or threads, as
elaborated in Module 1.
o Convenience. Even an individual user may work on many tasks at the
same time. For instance, a user may be editing, listening to music, and
compiling in parallel.
• Cooperating processes need interprocess communication (IPC) –
mechanism that will allow them to exchange data and information
• Two models of IPC
o Shared memory - a region of memory that is shared by cooperating
processes is established. Processes can then exchange information by
reading and writing data to the shared region.
o Message passing - communication takes place by means of messages
exchanged between the cooperating processes.
Producer-Consumer Problem
▪ Paradigm for cooperating processes, producer process produces
information that is consumed by a consumer process
▪ unbounded-buffer places no practical limit on the size of
the buffer
▪ bounded-buffer assumes that there is a fixed buffer size
▪ Bounded-Buffer – Producer
item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
▪ receive(Q, message) – receive a message from process
Q
▪ Properties of communication link
▪ Links are established automatically
▪ A link is associated with exactly one pair of
communicating processes
▪ Between each pair there exists exactly one link
▪ The link may be unidirectional, but is usually bi-
directional
Indirect Communication
▪ Messages are directed and received from mailboxes (also
referred to as ports)
▪ Each mailbox has a unique id
▪ Processes can communicate only if they share a
mailbox
▪ Properties of communication link
▪ Link established only if processes share a common
mailbox
▪ A link may be associated with many processes
▪ Each pair of processes may share several
communication links
▪ Link may be unidirectional or bi-directional
▪ Operations
▪ create a new mailbox (port)
▪ send and receive messages through mailbox
▪ destroy a mailbox
▪ Primitives are defined as:
▪ send(A, message) – send a message to mailbox A
▪ receive(A, message) – receive a message from mailbox
A
▪ Mailbox sharing
▪ P1, P2, and P3 share mailbox A
▪ P1, sends; P2 and P3 receive
▪ Who gets the message?
▪ Solutions
▪ Allow a link to be associated with at most two
processes
▪ Allow only one process at a time to execute a receive
operation
▪ Allow the system to select arbitrarily the receiver.
Sender is notified who the receiver was.
2. Synchronization
▪ Message passing may be either blocking or non-blocking
▪ Blocking is considered synchronous
▪ Blocking send -- the sender is blocked until the
message is received
▪ Blocking receive -- the receiver is blocked until a
message is available
▪ Non-blocking is considered asynchronous
▪ Non-blocking send -- the sender sends the message
and continues
▪ Non-blocking receive -- the receiver receives:
▪ A valid message, or
▪ Null message
▪ Different combinations possible
▪ If both send and receive are blocking, we have a
rendezvous
▪ Producer-consumer becomes trivial
message next_produced;
while (true) {
/* produce an item in next produced */
send(next_produced);
}
3. Buffering
▪ Queue of messages attached to the link.
▪ Implemented in one of three ways:
1. Zero capacity – no messages are queued on a link.
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
Socket Communication
• Sockets in Java
• Three types of sockets
o Connection-oriented (TCP)
o Connectionless (UDP)
o MulticastSocket class– data can be sent to multiple recipients
o Consider this “Date” server:
Figure 33. Date server.
• Problem: The RPC scheme requires a similar binding of the client and the
server port, but how does a client know the port numbers on the server?
Neither system has full information about the other, because they do not
share memory.
• Two approaches are common. First, the binding information may be
predetermined in the form of fixed port addresses. Second, binding can
be done dynamically by a rendezvous (matchmaker) daemon: the client
asks the daemon for the port address of the RPC it needs to execute,
the port number is returned, and RPC calls can then be
sent to that port until the process terminates (or the server crashes).
This method requires the extra overhead of the initial request but is
more flexible than the first approach.
Pipes
• Acts as a conduit allowing two processes to communicate
• Issues:
1. Is communication unidirectional or bidirectional?
2. In the case of two-way communication, is it half or full-duplex?
3. Must there exist a relationship (i.e., parent-child) between the
communicating processes?
4. Can the pipes be used over a network?
• Ordinary pipes – cannot be accessed from outside the process that created
them. Typically, a parent process creates a pipe and uses it to communicate
with a child process that it created.
• Named pipes – can be accessed without a parent-child relationship.
1. Ordinary Pipes
• Ordinary Pipes allow communication in standard producer-
consumer style
• Producer writes to one end (the write-end of the pipe)
• Consumer reads from the other end (the read-end of the pipe)
• Ordinary pipes are therefore unidirectional
• Require parent-child relationship between communicating
processes
• Windows calls these anonymous pipes
• See Unix and Windows code samples in textbook
2. Named Pipes
• Named Pipes are more powerful than ordinary pipes
• Communication is bidirectional
• No parent-child relationship is necessary between the
communicating processes
• Several processes can use the named pipe for communication
• Provided on both UNIX and Windows systems (named pipes are referred to
as FIFOs on UNIX systems)
Summary
A process is a program in execution. As a process executes, it changes state. The
state of a process is defined by that process’s current activity. Each process may be in
one of the following states: new, ready, running, waiting, or terminated. Each process
is represented in the operating system by its own process control block (PCB).
A process, when it is not executing, is placed in some waiting queue. There are
two major classes of queues in an operating system: I/O request queues and the ready
queue. The ready queue contains all the processes that are ready to execute and are
waiting for the CPU. Each process is represented by a PCB.
The operating system must select processes from various scheduling queues.
Long-term (job) scheduling is the selection of processes that will be allowed to contend
for the CPU. Normally, long-term scheduling is heavily influenced by resource-allocation
considerations, especially memory management. Short-term (CPU) scheduling is the
selection of one process from the ready queue.
Operating systems must provide a mechanism for parent processes to create new
child processes. The parent may wait for its children to terminate before proceeding, or
the parent and children may execute concurrently. There are several reasons for
allowing concurrent execution: information sharing, computation speedup, modularity,
and convenience.
Cooperating processes can communicate with each other through shared
memory or message passing, and the responsibility for providing communication
may rest with the operating system itself. These two schemes are not mutually
exclusive and can be used simultaneously within a single operating system.
Direction: Read the passage carefully and plan what you will write. Write your
answers on pad paper (yellow or white) to be submitted. Answer the questions
thoughtfully, and support your answers with the modules, books, or internet sources
(attach the websites of your references). Each question is worth 10 points. The essay
rubric below shows the corresponding points that will guide your essay.
Features | 9-10 points (Expert) | 7-8 points (Accomplished) | 4-6 points (Capable) | 1-3 points (Beginner)
Understanding | Writing shows strong understanding | Writing shows a clear understanding | Writing shows adequate understanding | Writing shows little understanding
Quality of Writing | Piece was written in an extraordinary style; very informative and well-organized | Piece was written in an interesting style; somewhat informative and organized | Piece had little style; gives some new information but poorly organized | Piece had no style; gives no new information and very poorly organized
Grammar, Usage & Mechanics | Virtually no spelling, punctuation or grammatical errors | Few spelling and punctuation errors, minor grammatical errors | A number of spelling, punctuation or grammatical errors | So many spelling, punctuation and grammatical errors that it interferes with the meaning
QUESTIONS:
1. Provide two programming examples in which multithreading provides better
performance than a single-threaded solution.
2. What are two differences between user-level threads and kernel-level threads?
Under what circumstances is one type better than the other?
3. Describe the actions taken by a kernel to context-switch between kernel level
threads.
4. What resources are used when a thread is created? How do they differ from those
used when a process is created?
LESSON 3: Threads
Objectives:
At the end of the lesson, you should be able to:
1. identify thoroughly the notion of a thread—a fundamental unit of CPU
utilization that forms the basis of multithreaded computer systems;
2. analyze comprehensively the APIs for the Pthreads, Windows, and Java thread
libraries;
3. distinguish accurately strategies that provide implicit threading;
4. inspect thoroughly issues related to multithreaded programming; and
5. inspect thoroughly operating system support for threads in Windows and Linux.
Let’s Engage.
The basic idea is that the several components in any complex system will
perform particular subfunctions that contribute to the overall function.
—THE SCIENCES OF THE ARTIFICIAL, Herbert Simon
A process embodies two characteristics: one relating to
resource ownership and another relating to execution. This distinction has led to the
development, in many operating systems, of a construct known as the thread.
The process model introduced in lesson 2 of this module assumed that a process
was an executing program with a single thread of control. Virtually all modern operating
systems, however, provide features enabling a process to contain multiple threads of
control. In this lesson, we introduce many concepts associated with multithreaded
computer systems, including a discussion of the APIs for the Pthreads, Windows, and
Java thread libraries. We look at a number of issues related to multithreaded
programming and its effect on the design of operating systems. Finally, we explore how
the Windows and Linux operating systems support threads at the kernel level.
1. Overview
1. Motivation
o Most modern applications are multithreaded
o Figure 35 shows single-threaded and multithreaded processes, where
threads run within the application
o A web browser might have one thread display images or text while another
thread retrieves data from the network, for example. A word processor may
have a thread for displaying graphics, another thread for responding to
keystrokes from the user, and a third thread for performing spelling and
grammar checking in the background. Applications can also be designed
to leverage processing capabilities on multicore systems.
o Multiple tasks with the application can be implemented by separate
threads
▪ Update display
▪ Fetch data
▪ Spell checking
▪ Answer a network request
o Figure 36 shows that threads also play a vital role in remote procedure call
(RPC) systems. Recall that RPCs allow interprocess communication by
providing a communication mechanism similar to ordinary function or
procedure calls. Typically, RPC servers are multithreaded. When a server
receives a message, it services the message using a separate thread. This
allows the server to service several concurrent requests.
o If the web-server process is multithreaded, the server will create a separate
thread that listens for client requests. When a request is made, rather than
creating another process, the server creates a new thread to service the
request and resumes listening for additional requests.
o Process creation is heavy-weight while thread creation is light-weight
o Can simplify code, increase efficiency
o Finally, most operating-system kernels are now multithreaded. Several
threads operate in the kernel, and each thread performs a specific task,
such as managing devices, managing memory, or interrupt handling. For
example, Solaris has a set of threads in the kernel specifically for interrupt
handling; Linux uses a kernel thread for managing the amount of free
memory in the system.
2. Benefits
• Responsiveness – may allow continued execution if part of process is
blocked, especially important for user interfaces
• Resource Sharing – threads share resources of process, easier than shared
memory or message passing
• Economy – cheaper than process creation, thread switching lower
overhead than context switching
• Scalability – process can take advantage of multiprocessor architectures
2. Multicore Programming
1. Programming Challenges
• Multicore or multiprocessor systems are putting pressure on programmers;
challenges for multicore systems include:
▪ Identifying tasks/Dividing Activities. This involves examining
applications to find areas that can be divided into separate,
concurrent tasks. Ideally, tasks are independent of one another and
thus can run in parallel on individual cores.
▪ Balance. While identifying tasks that can run in parallel,
programmers must also ensure that the tasks perform equal work
of equal value. In some instances, a certain task may not contribute
as much value to the overall process as other tasks. Using a
separate execution core to run that task may not be worth the cost.
▪ Data splitting. Just as applications are divided into separate tasks,
the data accessed and manipulated by the tasks must be divided to
run on separate cores.
▪ Data dependency. The data accessed by the tasks must be
examined for dependencies between two or more tasks. When one
task depends on data from another, programmers must ensure that
the execution of the tasks is synchronized to accommodate the data
dependency.
▪ Testing and debugging. When a program is running in parallel on
multiple cores, many different execution paths are possible. Testing
and debugging such concurrent programs is inherently more
difficult than testing and debugging single-threaded applications.
2. Types of Parallelism
• Parallelism implies a system can perform more than one task simultaneously
▪ Data parallelism – distributes subsets of the same data across
multiple cores, same operation on each
▪ Task parallelism – distributing threads across cores, each thread
performing a unique operation
o Concurrency supports more than one task making progress
▪ Single processor / core, scheduler providing concurrency
Amdahl’s Law
• Identifies performance gains from adding additional cores to an application
that has both serial and parallel components:
speedup <= 1 / (S + (1 - S) / N)
• S is the serial portion of the application
• N is the number of processing cores
• For example, if an application is 75% parallel and 25% serial, moving from
1 core to 2 cores results in a speedup of at most 1 / (0.25 + 0.75/2) = 1.6 times
3. Multithreading Models
Support for threads may be provided either at the user level, for user threads, or by the
kernel, for kernel threads. User threads are supported above the kernel and are managed
without kernel support, whereas kernel threads are supported and managed directly by
the operating system. Virtually all contemporary operating systems—including
Windows, Linux, Mac OS X, and Solaris— support kernel threads. Ultimately, a
relationship must exist between user threads and kernel threads. There are three
common ways of establishing such a relationship: the many-to-one model, the one-to-
one model, and the many-to-many model.
1.Many-to-One
• Many user-level threads mapped to single kernel thread
• One thread blocking causes all to block
• Multiple threads may not run in parallel on a multicore system because only one
may be in the kernel at a time
• Few systems currently use this model
• Examples: Solaris Green Threads, GNU Portable Threads
2. One-to-One
• Each user-level thread maps to kernel thread
• Creating a user-level thread creates a kernel thread
• More concurrency than many-to-one
• Number of threads per process sometimes restricted due to overhead
• Examples: Windows, Linux, Solaris 9 and later
3. Many-to-Many Model
• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
4. Thread Libraries
• Thread library provides programmer with API for creating and managing
threads
• Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS
A. Pthreads
o May be provided either as user-level or kernel-level
o A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
o Specification, not implementation
o API specifies the behavior of the thread library; implementation is
up to the developers of the library
o Common in UNIX operating systems (Solaris, Linux, Mac OS X)
o Pthreads Example
Pthreads Code for Joining 10 Threads
C. Java Threads
5. Implicit Threading
o Growing in popularity as numbers of threads increase, program correctness more
difficult with explicit threads
o Creation and management of threads done by compilers and run-time libraries
rather than programmers
o Three methods explored
o Thread Pools
o OpenMP
o Grand Central Dispatch
o Other methods include Microsoft Threading Building Blocks (TBB),
java.util.concurrent package
1. Thread Pools
o Create a number of threads in a pool where they await work
o Advantages:
o Usually slightly faster to service a request with an existing thread than
create a new thread
o Allows the number of threads in the application(s) to be bound to the
size of the pool
o Separating task to be performed from mechanics of creating task allows
different strategies for running task
▪ e.g., tasks could be scheduled to run periodically
o The Windows API also supports thread pools
2. OpenMP
o Set of compiler directives and an API for C, C++, FORTRAN
o Provides support for parallel programming in shared-memory
environments
o Identifies parallel regions – blocks of code that can run in parallel
o For example, the directive #pragma omp parallel creates as many
threads as there are cores
6. Threading Issues
1. Semantics of fork() and exec() system calls
o Does fork() duplicate only the calling thread or all threads?
▪ Some UNIXes have two versions of fork
o exec() usually works as normal – replace the running process including all
threads
2. Signal handling
o Signals are used in UNIX systems to notify a process that a particular
event has occurred.
o A signal handler is used to process signals
▪ Signal is generated by particular event
▪ Signal is delivered to a process
▪ Signal is handled by one of two signal handlers:
• default
• user-defined
o Every signal has default handler that kernel runs when handling signal
▪ User-defined signal handler can override default
▪ For single-threaded, signal delivered to process
o Where should a signal be delivered for multi-threaded?
▪ Deliver the signal to the thread to which the signal applies
▪ Deliver the signal to every thread in the process
▪ Deliver the signal to certain threads in the process
▪ Assign a specific thread to receive all signals for the process
3. Thread Cancellation
o Terminating a thread before it has finished
o Thread to be canceled is target thread
o Two general approaches:
▪ Asynchronous cancellation terminates the target thread
immediately
▪ Deferred cancellation allows the target thread to periodically
check if it should be cancelled
o Pthread code to create and cancel a thread:
4. Thread-Local Storage
o Thread-local storage (TLS) allows each thread to have its own copy of data
o Useful when you do not have control over the thread creation process (i.e.,
when using a thread pool)
o Different from local variables
o Local variables visible only during single function invocation
o TLS visible across function invocations
o Similar to static data
o TLS is unique to each thread
5. Scheduler Activations
o Both M:M and Two-level models require communication to maintain the
appropriate number of kernel threads allocated to the application
o Typically use an intermediate data structure between user and kernel threads –
lightweight process (LWP)
o Appears to be a virtual processor on which process can schedule user
thread to run
o Each LWP attached to kernel thread
o How many LWPs to create?
o Scheduler activations provide upcalls – a communication mechanism from the
kernel to the upcall handler in the thread library
o This communication allows an application to maintain the correct number of
kernel threads
1. Windows Threads
o Windows implements the Windows API – primary API for Win 98, Win NT, Win
2000, Win XP, and Win 7
o Implements the one-to-one mapping, kernel-level
o Each thread contains
o A thread id
o Register set representing state of processor
o Separate user and kernel stacks for when thread runs in user mode or
kernel mode
o Private data storage area used by run-time libraries and dynamic link
libraries (DLLs)
o The register set, stacks, and private storage area are known as the context of
the thread
o The primary data structures of a thread include:
o ETHREAD (executive thread block) – includes pointer to process to which
thread belongs and to KTHREAD, in kernel space
o KTHREAD (kernel thread block) – scheduling and synchronization info,
kernel-mode stack, pointer to TEB, in kernel space
o TEB (thread environment block) – thread id, user-mode stack, thread-local
storage, in user space
2. Linux Threads
o Linux refers to them as tasks rather than threads
o Thread creation is done through clone() system call
o clone() allows a child task to share the address space of the parent task
(process)
o Flags (such as CLONE_VM, CLONE_FS, CLONE_FILES, and CLONE_SIGHAND)
control how much sharing takes place between the parent and child tasks
Summary
User-level threads are threads that are visible to the programmer and are
unknown to the kernel. The operating-system kernel supports and manages kernel-level
threads. In general, user-level threads are faster to create and manage than are kernel
threads, because no intervention from the kernel is required.
Three different types of models relate user and kernel threads. The many-to-one
model maps many user threads to a single kernel thread. The one-to-one model maps
each user thread to a corresponding kernel thread. The many-to-many model multiplexes
many user threads to a smaller or equal number of kernel threads.
Most modern operating systems provide kernel support for threads. These include
Windows, Mac OS X, Linux, and Solaris. Thread libraries provide the application
programmer with an API for creating and managing threads. Three primary thread
libraries are in common use: POSIX Pthreads, Windows threads, and Java threads.
Multithreaded programs introduce issues concerning the semantics of the fork() and exec()
system calls. Other issues include signal handling, thread cancellation, thread-local
storage, and scheduler activations.
Activity:
Direction: Read the passage carefully and plan what you will write. Write your answers
on pad paper (yellow or white) to be submitted. Each question is worth 10 points.
The essay rubric below shows the corresponding points that will guide your essay.
“Construct your determination with Sustained Effort, Controlled Attention and Concentrated
Energy, Opportunities never come to those who wait… they are captured by those who dare to
attack” – Paul J. Meyer
Features | 9-10 points (Expert) | 7-8 points (Accomplished) | 4-6 points (Capable) | 1-3 points (Beginner)
Understanding | Writing shows strong understanding | Writing shows a clear understanding | Writing shows adequate understanding | Writing shows little understanding
Quality of Writing | Piece was written in an extraordinary style; very informative and well-organized | Piece was written in an interesting style; somewhat informative and organized | Piece had little style; gives some new information but poorly organized | Piece had no style; gives no new information and very poorly organized
Grammar, Usage & Mechanics | Virtually no spelling, punctuation or grammatical errors | Few spelling and punctuation errors, minor grammatical errors | A number of spelling, punctuation or grammatical errors | So many spelling, punctuation and grammatical errors that it interferes with the meaning
QUESTIONS:
1) Provide two programming examples in which multithreading provides
better performance than a single-threaded solution.
2) What are two differences between user-level threads and kernel-level
threads? Under what circumstances is one type better than the other?
3) Describe the actions taken by a kernel to context-switch between kernel level
threads.
4) What resources are used when a thread is created? How do they differ
from those used when a process is created?
POST ASSESSMENT
Directions: The following questions cover general areas of this module. You may not
know the answers to all questions, but please attempt to answer them without asking
others or referring to books. Write your answers on a separate pad paper to be
submitted to the instructor.
Choose the best answer for each question and write the letter of your
choice after the number.
1. These services generally manipulate files and directories: create, delete, copy,
rename, and print.
a. File modification
b. Application program
c. Communication
d. File management
2. View the date, time, amount of available memory, disk space, and user.
a) Status information
b) File management
c) File modification
d) PL support
3. It provides facilities like disk checking, process scanning, error loading, and
printing.
a) File modification
b) Background services
c) Communication
d) Application program
4. A text editor to create and modify files or perform transformations of the text.
a) A file modification
b) Background services
c) Application program
d) PL support
5. Software that is not typically considered as part of the Operating System and run
by the users.
a) Background services
b) Communication
c) PL support
d) Application program
6. It was written to provide the most functionality in the least space so it was not
carefully divided into modules.
a) MS-DOS
b) UNIX
c) ROM BIOS
d) Application software
7. A system software that has limited hardware functionality and limited
structuring.
a) Window 98
b) Window XP
c) UNIX
d) MS-DOS
8. In the layered operating system under layered approach the bottom layer is
called______.
a) User interface
b) Hardware
c) Application
d) UNIX
9. An operating system that uses the microkernel structure is:
a) Mac OSX
b) Unix
c) Solaris
d) Microsoft
10. Is the Android operating system, developed by Google, open source?
a) True
b) False
REFERENCES
Silberschatz, A., Galvin, P. B., & Gagne, G. (2013). Operating System Concepts. 9th
Edition. Hoboken, New Jersey, USA: John Wiley & Sons, Inc.
Stallings, William (2012). Operating Systems: Internals and Design Principles. 7th
Edition. Upper Saddle River, New Jersey: Pearson Education, Inc.