Assignment of Computer Fundamentals


INTERNAL ASSIGNMENT

DBB1105 – COMPUTER FUNDAMENTALS


SET - 1
ANS01. We can define a computer as an electronic device that performs
mathematical and non-mathematical operations with the help of instructions
to process information in order to achieve desired results.
We can list the characteristics of a computer as speed, storage, accuracy,
reliability, automation, diligence and versatility. The generations of
computers are as follows:

First Generation Computers [1942 – 1955]


The first generation computers were built with vacuum tubes. These computers
used vacuum tubes for circuitry and magnetic drums for memory. They relied
on binary-coded machine language to perform operations and were able to
solve only one problem at a time. They were very large in size and required a
lot of space for installation. These computers lacked versatility and speed.
Since machine language was used, these computers were difficult to program
and use. Examples: ENIAC, EDVAC.
Second Generation Computers [1955 – 1964]
Second generation computers were built with transistors. The vacuum tubes
were replaced by transistors, which are made of semiconductor materials
such as germanium and silicon. The small size of the transistor greatly
reduced the size of the computers. These computers used magnetic cores as
primary memory and magnetic disks as secondary storage devices. Computers
became smaller, faster, cheaper and more reliable than their predecessors.
The computational time was reduced from milliseconds to microseconds.
Examples: IBM 1401, IBM 7090.
Third Generation Computers [1965 – 1974]
The greatest development of the third generation was the integrated circuit
(IC). An IC consists of a single chip with components such as transistors and
resistors fabricated on it. The IC helped decrease the size of computers, as a
single chip replaced several individually wired transistors. In first and
second generation computers, punched cards and printouts were used to
interact with the computers. From the third generation onwards, users
interacted through keyboards and monitors and interfaced with an operating
system. The computers became smaller and cheaper and hence became
popular. High-level programming languages were used for programming.
Examples: NCR 395, B6500.
Fourth Generation Computers [1975 – Till date]
The fourth generation computers use microprocessors (circuits containing
millions of transistors) as their basic processing device. A microprocessor is
built on a single piece of silicon called a chip. The computers of the fourth
generation led to the growth of large scale integration (LSI) and very large
scale integration (VLSI) technology. This technology helped in squeezing
thousands of transistors onto a single chip. Ultra-large scale integration
(ULSI) increased the number into millions. In this way the computers became
smaller in size and cheaper. During this period computers became popular
with the masses. This generation also saw the development of the graphical
user interface, the mouse and handheld devices. IBM worked with Microsoft
during the 1980s to start what we can really call PC (Personal Computer) life
today. The IBM PC was introduced in October 1981 and it worked with the
operating system (software) called Microsoft Disk Operating System
(MS-DOS) 1.0. Development of MS-DOS began in October 1980 when IBM
began searching the market for an operating system for the then proposed
IBM PC, and major contributors were Bill Gates, Paul Allen and Tim Paterson.
In 1983, Microsoft Windows was announced, and it has witnessed several
improvements and revisions over the last twenty years.
Examples: Apple II, CRAY – 1.
Fifth Generation Computers
The goal of fifth generation computing is to develop devices that respond
to natural language input and have the capability of learning and self-
organization. These computers use intelligent programming (artificial
intelligence) and knowledge-based problem solving techniques. The input and
output for these machines will be in the form of graphic images or speech.
Presently these computers are used in the fields of medicine, treatment
planning, monitoring etc. on a very small scale.
ANS02. a) A72E
In the numeral system, we know hexadecimal is base-16 and octal is base-8.
To convert hexadecimal A72E to octal, follow these steps:
First, convert A72E₁₆ into decimal:

A72E₁₆
= A × 16³ + 7 × 16² + 2 × 16¹ + E × 16⁰
= 10 × 4096 + 7 × 256 + 2 × 16 + 14 × 1
= 42798₁₀

Now we have to convert 42798₁₀ to octal by repeated division by 8:


42798 / 8 = 5349 with remainder 6
5349 / 8 = 668 with remainder 5
668 / 8 = 83 with remainder 4
83 / 8 = 10 with remainder 3
10 / 8 = 1 with remainder 2
1 / 8 = 0 with remainder 1
Then just write down the remainders in reverse order to get the answer.
The hexadecimal number A72E converted to octal is therefore:
A72E₁₆ = 123456₈
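
As a quick check, the two-step method above can be mirrored in a short
Python sketch (Python is used here purely for illustration; the function name
is our own):

    # Hexadecimal -> decimal, then repeated division by 8,
    # collecting remainders, mirroring the manual working above.
    def hex_to_octal(hex_str):
        value = int(hex_str, 16)           # step 1: hexadecimal to decimal
        digits = []
        while value > 0:
            digits.append(str(value % 8))  # remainder of each division by 8
            value //= 8
        return "".join(reversed(digits))   # remainders read in reverse order

    print(int("A72E", 16))        # 42798
    print(hex_to_octal("A72E"))   # 123456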
b) 4.BF85
First we will perform the translation through the decimal system.
Translate to decimal like this:

4.BF85₁₆
= 4 × 16⁰ + 11 × 16⁻¹ + 15 × 16⁻² + 8 × 16⁻³ + 5 × 16⁻⁴
= 4 × 1 + 11 × 0.0625 + 15 × 0.00390625 + 8 × 0.000244140625 + 5 × 0.0000152587890625
= 4 + 0.6875 + 0.05859375 + 0.001953125 + 0.0000762939453125
= 4.7481231689453125₁₀

So 4.BF85₁₆ = 4.7481231689453125₁₀
Now translate 4.7481231689453125₁₀ into octal. The integer part, 4, is the
same in octal. The fractional part of the number is repeatedly multiplied by
the base of the new number system, and the integer part of each product
gives the next octal digit:
0.7481231689453125 × 8 = 5.9849853515625, integer part 5
0.9849853515625 × 8 = 7.8798828125, integer part 7
0.8798828125 × 8 = 7.0390625, integer part 7
0.0390625 × 8 = 0.3125, integer part 0
0.3125 × 8 = 2.5, integer part 2
0.5 × 8 = 4.0, integer part 4
The fractional part is now zero, so the conversion terminates.
The result of the conversion is: 4.7481231689453125₁₀ = 4.577024₈
The final answer: 4.BF85₁₆ = 4.577024₈
Now we will perform a direct translation.
First, translate from hexadecimal to binary, digit by digit:
4.BF85₁₆ = 4(=0100).B(=1011) F(=1111) 8(=1000) 5(=0101) = 100.1011111110000101₂
So 4.BF85₁₆ = 100.1011111110000101₂
Then translate from binary to octal by grouping the bits in threes, padding
the fraction with zeros on the right:
100.101111111000010100₂ = 100(=4).101(=5) 111(=7) 111(=7) 000(=0) 010(=2) 100(=4) = 4.577024₈
The final answer: 4.BF85₁₆ = 4.577024₈
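
The direct method lends itself to a small Python sketch as a cross-check (a
sketch under our own naming, not part of the assignment material): each hex
digit expands to four bits, and the fractional bits are regrouped in threes.

    # Hex -> binary -> octal for a number with a fractional part.
    def hex_fraction_to_octal(hex_str):
        int_part, frac_part = hex_str.split(".")
        bits = "".join(f"{int(d, 16):04b}" for d in frac_part)  # 4 bits per hex digit
        bits += "0" * (-len(bits) % 3)                          # pad to a multiple of 3
        octal = "".join(str(int(bits[i:i + 3], 2))              # 3 bits per octal digit
                        for i in range(0, len(bits), 3))
        return f"{int(int_part, 16):o}.{octal}"

    print(hex_fraction_to_octal("4.BF85"))   # 4.577024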
c) (10101)₂
Calculating the decimal equivalent:
Step 1 − 10101₂ = (1 × 2⁴ + 0 × 2³ + 1 × 2² + 0 × 2¹ + 1 × 2⁰)₁₀
Step 2 − 10101₂ = (16 + 0 + 4 + 0 + 1)₁₀
Step 3 − 10101₂ = 21₁₀
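
This expansion can be verified with Python's built-in base parser
(illustrative only):

    print(int("10101", 2))   # 21
    # the same positional expansion, written out explicitly:
    print(sum(int(b) * 2**i for i, b in enumerate(reversed("10101"))))   # 21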

ANS03. Relative cell references in Excel change automatically. This is the
default reference type used by Excel. It enables you to quickly copy formulas
from one location to the next (usually across rows or columns). For example,
=SUM(B5:B8) changes to =SUM(C5:C8) when copied across to the next cell.
A slide master is capable of making changes to the theme, the slide layouts of
a presentation, background settings, colors, fonts, placeholders, and more.
The steps to use slide master in a presentation are as follows:
Step 1 − Go to the Master Views group under the View ribbon.
Step 2 − Click on Slide Master to open the Slide Master ribbon. The topmost
slide in the left sidebar is the master slide. All the slides within this master
template will follow the settings we add on this master slide.
Step 3 − We can make changes to the master slide in terms of the theme,
design, font properties, position and size of the title and other content using
the remaining ribbons which are still accessible.
Step 4 − While PowerPoint provides some default slide layouts, we can create
our own layouts by clicking on the "Insert Layout" in the Edit Master section
of the Slide Master ribbon.
Step 5 − We can add content placeholders to the slide layouts using the
"Insert Placeholder" in the Master Layout group under the Slide Master
ribbon. Under the Placeholder dropdown, we can either create a generic
content placeholder or specify the kind of content we want in that
placeholder.
Step 6 − We can apply different themes, background and page setup settings
to all the slides from the master slide.
Step 7 − We can also customize individual slide layouts to be different from
the master slide using the menu options available with the layouts.
Transitions & Animations
A user can apply transitions and animation to the individual slides.
Transitions: Slide transitions are the animation-like effects that occur in a
slide show when you move from one slide to the next. Slide transition effects
can be controlled, and transitions are applied to an entire slide.
Animations: Slide animations are applied to individual elements within a
slide, i.e. text boxes, charts, pictures, drawings etc. Like slide transitions,
slide animations can also be controlled.

Applying transitions
There are five broad transition categories:
1. Fades and dissolves
2. Wipes
3. Push and Cover
4. Stripes and bars
5. Random
To apply slide transitions
1. Click on the View tab > Presentation Views (subgroup) and click on the
Slide Sorter icon.
2. Slides will be displayed in the slide sorter view. This view makes it easy to
select, organize and manipulate individual slides.
3. From the slide sorter view, select the individual slide that you want to
apply the transition to.
4. Click on the Animations tab > Transition to This Slide (subgroup). Click on
the drop-down arrow of the gallery. A drop-down menu appears with all the
various types of transitions.
5. Select the desired transition. The transition is applied to the selected
slides. A live preview is shown each time a transition is selected.
NOTE: To apply a transition to all the slides, click on the Apply To All icon in
the Transition to This Slide subgroup.
Using Animation
In addition to transitions, animations can be added to a slide. Unlike
transitions, animations are not applied to a slide as a whole; instead they are
added to individual elements within a slide, for example text boxes, pictures,
text placeholders, drawings etc.
Applying animations
1. Click on the View tab > Presentation Views (subgroup) and select Normal
view.
2. Click on the Animations tab on the ribbon.
3. On the slide, click on the element that you would like to apply an animation
to. The selected element is enclosed in a rectangular box.
4. Under the Animations subgroup, click on the Animate drop-down menu. A
user can select one of the options.
5. Alternatively, a user can define their own animation by clicking on the
Custom Animation icon in the Animations subgroup.
6. When a user clicks on the Custom Animation icon, a task pane appears on
the right-hand side of the window.
SET – 2

ANS04. The two design strategies for software system design are the
following:
1. Functional design: The system is designed from a functional viewpoint,
starting with a high-level view and progressively refining this into a more
detailed design. The system state is centralized and shared between the
functions operating on that state. Methods such as Jackson Structured
Programming (JSP, developed by Michael Jackson) and the Warnier-Orr
method (a technique based on only a few simple design principles that are
very easy to learn and to apply) are techniques of functional decomposition,
where the structure of the data is used to determine the functional structure
used to process that data.
2. Object-oriented design: The system is viewed as a collection of objects
rather than as functions. Object-oriented design is based on the idea of
information hiding and has been described by Meyer, Booch, Jacobson and
many others. JSD is a design method that falls somewhere between
function-oriented and object-oriented design.
In an object-oriented design, the system state is decentralized and each
object manages its own state information. Objects have a set of attributes
defining their state and operations which act on these attributes. Objects are
usually members of an object class whose definition defines the attributes
and operations of class members. These may be inherited from one or more
super-classes, so that a class definition need only set out the differences
between that class and its super-classes. Objects communicate by exchanging
messages; most object communication is achieved by one object calling a
procedure associated with another object.
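
These ideas (objects managing their own state, operations acting on
attributes, inheritance from a super-class, and communication by invoking
another object's operations) can be illustrated with a minimal Python sketch;
the class names here are invented for the example and do not come from the
assignment material:

    class Account:                            # an object class: attributes plus operations
        def __init__(self, owner, balance=0):
            self.owner = owner
            self._balance = balance           # each object manages its own state

        def deposit(self, amount):            # an operation acting on the attributes
            self._balance += amount

        def balance(self):
            return self._balance

    class SavingsAccount(Account):            # inherits from its super-class; only the
        def add_interest(self, rate):         # differences need to be set out
            self.deposit(self._balance * rate)   # "message" to one of its own operations

    acct = SavingsAccount("Asha", 1000)
    acct.add_interest(0.05)
    print(acct.balance())                     # 1050.0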
There is no ‘best’ design strategy suitable for all projects and all types of
application. Functional and object-oriented approaches are complementary
rather than opposing techniques. Software engineers select the most
appropriate approach for each stage in the design process. In fact, large
software systems are such complex entities that different approaches might
be used in the design of different parts of the system.
An object-oriented approach to software design seems to be natural at the
highest and lowest levels of system design. Using different approaches to
design may require the designer to convert his or her design from one model
to another. Many designers are not trained in multiple approaches and so
prefer to use either object-oriented or functional design.
The four quality measures for building software are the following:
• Correspondence – measures how well the delivered system matches the
needs of the operational environment, as described in the original
requirements statement.
• Validation – the task of predicting correspondence.
• Correctness – measures the consistency of the product requirements with
respect to the design specification.
• Verification – the exercise of determining correctness.

Validation begins as soon as the project starts, but verification can begin only
after a specification has been accepted.

ANS05. The main functions of an operating system are:


1. Resource Management: The resource management function of an
operating system allocates computer resources such as CPU time, main
memory, secondary storage, and input and output devices. One can view
operating systems from two points of view: as a resource manager and as an
extended machine. From the resource manager point of view, operating
systems manage the different parts of the system efficiently; from the
extended machine point of view, operating systems provide a virtual machine
to users that is more convenient to use. Structurally, operating systems can
be designed as a monolithic system, a hierarchy of layers, a virtual machine
system, a micro-kernel, or using the client-server model. The basic concepts
of operating systems are processes, memory management, I/O management,
file systems, and security.
2. Data Management: Data management keeps track of the data on disk and
other storage devices. The application program deals with data by file name
and a particular location within the file. The operating system's file system
knows where the data are physically stored, and interaction between the
application and the operating system is through the application programming
interface (API). Whenever an application needs to read or write data, it
makes a call to the operating system.
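
For instance, a minimal Python sketch (the file name is hypothetical): the
application names a file and a position within it, and each open, seek and
read is ultimately serviced by the operating system:

    # Write a small hypothetical data file, then read part of it back.
    with open("records.dat", "wb") as f:      # open() requests the OS to create the file
        f.write(bytes(256))

    with open("records.dat", "rb") as f:
        f.seek(128)                           # position within the file, by offset
        chunk = f.read(64)                    # read() becomes a call to the OS
    print(len(chunk))                         # 64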
3. Job management: Job management controls the order and time in which
applications are run and is more sophisticated in the mainframe environment
where scheduling the daily work has always been routine. IBM's job control
language (JCL) was developed decades ago for that purpose. In a desktop
environment, batch files can be written to perform a sequence of operations
that can be scheduled to start at a given time.
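
The same idea can be sketched in Python (the job commands here are
placeholders): a batch of jobs runs in sequence, optionally delayed until a
scheduled start time:

    import subprocess
    import sys
    import time

    jobs = [                                   # placeholder batch of commands
        [sys.executable, "-c", "print('backup job')"],
        [sys.executable, "-c", "print('report job')"],
    ]

    time.sleep(2)                              # crude stand-in for "start at a given time"
    for job in jobs:                           # run the batch in order, one after another
        subprocess.run(job, check=True)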
4. Task Management: Task management, or multitasking, which is the ability
to execute multiple programs simultaneously, is available in all operating
systems today. Critical in the mainframe and server environment,
applications can be prioritized to run faster or slower depending on their
purpose. In the desktop world, multitasking is necessary for keeping several
applications open at the same time so users can bounce back and forth
among them.
5. Device Management: Device management controls peripheral devices by
sending them commands in their proprietary command language. The
software routine that deals with each device is called a "driver," and the
operating system requires drivers for the peripherals attached to the
computer. When a new peripheral is added, that device's driver is installed
into the operating system.
6. User Interface: The user interacts with the operating system through the
user interface and is usually interested in the look and feel of the operating
system. The most important components of the user interface are the
command interpreter, the file system, on-line help, and application
integration. The recent trend has been toward increasingly integrated
graphical user interfaces that encompass the activities of multiple processes
on networks of computers.
The various components of operating systems are the following:
1. Process Management: The operating system manages many kinds of
activities ranging from user programs to system programs like the printer
spooler, name servers, file server etc. Each of these activities is encapsulated
in a process. A process includes the complete execution context (code, data,
PC, registers, OS resources in use, etc.).
2. Main-Memory Management: Primary memory or main memory is a large
array of words or bytes. Each word or byte has its own address. Main memory
provides storage that can be accessed directly by the CPU; that is to say, for a
program to be executed, it must be in main memory.
The major activities of an operating system in regard to memory
management are:
1. Keep track of which part of memory is currently being used and by whom.
2. Decide which processes are loaded into memory when memory space
becomes available.
3. Allocate and de-allocate memory space as needed.
3. File Management: A file is a collection of related information defined by its
creator. A computer can store files on disk (secondary storage), which
provides long-term storage. Some examples of storage media are magnetic
tape, magnetic disk and optical disk. Each of these media has its own
properties, such as speed, capacity, data transfer rate and access method. A
file system is normally organized into directories to ease its use. These
directories may contain files and other directories.
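
As a small illustration of this hierarchical organization, a Python sketch (the
starting directory is arbitrary) that walks one level of a directory:

    from pathlib import Path

    root = Path(".")                           # an arbitrary starting directory
    for entry in root.iterdir():               # a directory may contain files...
        kind = "dir " if entry.is_dir() else "file"
        print(kind, entry.name)                # ...and other directories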
4. I/O System Management: I/O subsystem hides the peculiarities of specific
hardware devices from the user. Only the device driver knows the
peculiarities of the specific device to which it is assigned.
5. Secondary-Storage Management: Systems normally have several levels of
storage, including primary storage, secondary storage and cache storage.
Instructions and data must be placed in primary storage or cache to be
referenced by a running program. Because main memory is too small to
accommodate all data and programs, and its data are lost when power is lost,
the computer system must provide secondary storage to back up main
memory. Secondary storage consists of tapes, disks, and other media
designed to hold information that will eventually be accessed in primary
storage. Storage at each level (primary, secondary, cache) is ordinarily
divided into bytes or words consisting of a fixed number of bytes. Each
location in storage has an address; the set of all addresses available to a
program is called an address space.
The three major activities of an operating system in regard to secondary
storage management are:
1. Scheduling the requests for memory access.
2. Managing the free space available on the secondary-storage device.
3. Allocation of storage space when new files have to be written.
ANS06. The TCP/IP protocol layers are as follows:
(i) Application layer: The application layer is provided by the program that
uses TCP/IP for communication. An application is a user process cooperating
with another process, usually on a different host (there is also a benefit to
application communication within a single host). Examples of applications
include Telnet and the File Transfer Protocol (FTP). The interface between
the application and transport layers is defined by port numbers and sockets.
(ii) Transport layer: The transport layer provides end-to-end data transfer
by delivering data from an application to its remote peer. Multiple
applications can be supported simultaneously. The most-used transport layer
protocol is the Transmission Control Protocol (TCP), which provides
connection-oriented reliable data delivery, duplicate data suppression,
congestion control, and flow control. Another transport layer protocol is the
User Datagram Protocol (UDP). It provides connectionless, unreliable,
best-effort service. As a result, applications using UDP as the transport
protocol have to provide their own end-to-end integrity, flow control, and
congestion control, if desired. Usually, UDP is used by applications that need
a fast transport mechanism and can tolerate the loss of some data.
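
The contrast between the two protocols can be sketched with Python's
standard socket API (the host and ports are illustrative assumptions): TCP
must establish a connection before data flows, while UDP simply sends a
datagram with no delivery guarantee:

    import socket

    # TCP: connection-oriented, reliable byte stream
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))           # handshake establishes the connection
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(64))                        # delivery is ordered and acknowledged
    tcp.close()

    # UDP: connectionless, best-effort datagrams
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"ping", ("example.com", 9))    # no connection, no delivery guarantee
    udp.close()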
(iii) Internetwork layer: The internetwork layer, also called the internet
layer or the network layer, provides the “virtual network” image of an internet
(this layer shields the higher levels from the physical network architecture
below it). Internet Protocol (IP) is the most important protocol in this layer. It
is a connectionless protocol that does not assume reliability from lower
layers. IP does not provide reliability, flow control, or error recovery.
(iv) Network interface layer: The network interface layer, also called the
link layer or the data-link layer, is the interface to the actual network
hardware. This interface may or may not provide reliable delivery, and may be
packet or stream oriented. In fact, TCP/IP does not specify any protocol here,
but can use almost any network interface available, which illustrates the
flexibility of the IP layer.
To see how the Internet works, consider sending mail within a network: we
enter the name of the user (i.e., the user name of the person to whom we
want to send the mail), and the e-mail system of the network sends the mail
to the concerned address. This arrangement is similar to calling a local
telephone number. Anyone wanting to access a telephone number from
outside the city limits must enter the area code and the telephone number.
Similarly, if two people are not on the same e-mail system, they must enter
fully qualified addresses. Between these two persons there may be many
networks. Once the e-mail is sent, the message is broken into small pieces
called “packets”. Packets are the basic unit of measurement on the Internet.
There are special-purpose computers called “routers” on the Internet, which
decide the best path to the destination for these packets. Once these packets
reach their destination
they are reassembled into the original message. The Internet has been
described as co-operative anarchy. Each individual network has its own rules.
Communication between networks is possible because of co-operation. There
is no central administration of the Internet but there are formal bodies within
the Internet that perform coordinating functions.
