Escondo SG 2

The document contains responses to 15 questions about computer architecture and organization topics such as binary representations of numbers, signed integers, ASCII, Unicode, error detection techniques like cyclic redundancy checks and Hamming codes. Key terms and concepts explained include bits, bytes, nibbles, words, positional numbering systems, binary-to-decimal conversion methods, signed integer representations, ASCII, Unicode, Manchester coding, run-length limited encoding, cyclic redundancy checks, systematic error detection, and Hamming codes.

Elijah Ethanli D.

Escondo

BSIT

Computer Architecture and Organization

1. Explain how the terms bit, byte, nibble, and word are related.

A bit is a single binary digit, a one or a zero. A nibble is four bits, and a byte is eight bits (two nibbles). A word is the number of bits a particular processor is built to handle as a unit, commonly 8, 16, 32, or 64 bits. Each term therefore names a grouping of the one below it.
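The relationships above can be sketched with a few bit operations (the word value and variable names are purely illustrative):

```python
# Illustrative sketch: slicing a 16-bit word into its bytes and nibbles.
word = 0b1011_0110_0011_1010                # a 16-bit word

high_byte = (word >> 8) & 0xFF              # upper 8 bits (one byte)
low_byte = word & 0xFF                      # lower 8 bits (one byte)
nibbles = [(word >> s) & 0xF for s in (12, 8, 4, 0)]   # four 4-bit nibbles

print(f"{high_byte:08b} {low_byte:08b}")    # the word as two bytes
print([f"{n:04b}" for n in nibbles])        # the word as four nibbles
```

Masking with 0xFF and 0xF extracts one byte and one nibble respectively, making the containment relationship (word > byte > nibble > bit) concrete.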

2. Why are binary and decimal called positional numbering systems?

Binary and decimal are called positional numbering systems because the value of each digit depends on its position within the number. Each position carries a weight equal to a power of the base (the radix), and a number's value is the sum of every digit multiplied by its positional weight. The base itself is simply the number of distinct digits the system uses: ten for decimal, two for binary.
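A minimal sketch of the positional rule (the function name is made up for illustration):

```python
# A number's value is the sum of digit * base**position; accumulating with
# "value * base + digit" applies exactly those positional weights.
def positional_value(digits, base):
    """digits is most-significant first, e.g. [1, 0, 1, 1] for 1011."""
    value = 0
    for d in digits:
        value = value * base + d    # shift all earlier weights up one position
    return value

print(positional_value([1, 0, 1, 1], 2))    # → 11 (binary 1011)
print(positional_value([1, 0, 1, 1], 10))   # → 1011 (decimal 1011)
```

The same digit string yields different values in different bases, which is the essence of positional notation.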

3. How many of the "numbers to remember" (in all bases) from Figure 2.1 can you remember?

13 numbers

4. Name the three ways in which signed integers can be represented in digital computers and explain
the differences.

The three ways signed integers can be represented are signed magnitude, one's complement, and two's complement. In signed magnitude, the leftmost bit holds the sign and the remaining bits hold the magnitude; it is the easiest for people to read, but it complicates the hardware, since addition and subtraction need separate logic and zero has two representations. One's complement negates a number by inverting every bit; it simplifies some arithmetic, but it still has both a positive and a negative zero. Two's complement negates a number by inverting every bit and then adding one; it has a single zero, and the same adder handles both addition and subtraction, because the weight of the sign bit is the negative of the corresponding power of two.
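The three representations can be compared directly; this sketch (function names are illustrative) shows how -5 comes out in 8 bits under each scheme:

```python
# Each function returns the 8-bit pattern representing n under one scheme.
def signed_magnitude(n, bits=8):
    return ((1 << (bits - 1)) | abs(n)) if n < 0 else n   # sign bit + magnitude

def ones_complement(n, bits=8):
    mask = (1 << bits) - 1
    return (~abs(n)) & mask if n < 0 else n               # invert every bit

def twos_complement(n, bits=8):
    return n & ((1 << bits) - 1)                          # invert and add one

for name, f in [("signed magnitude", signed_magnitude),
                ("one's complement", ones_complement),
                ("two's complement", twos_complement)]:
    print(f"{name:18s} {f(-5):08b}")
```

The output shows 10000101, 11111010, and 11111011: the same value, three different bit patterns, which is why mixing representations in one system would be disastrous.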

5. Which one of the three integer representations is used most often by digital computer systems?

The two's complement representation is used most often, because it gives a single representation of zero and lets the same hardware perform addition and subtraction on positive and negative values alike, which the signed magnitude representation cannot do.

6. Do you think that double-dabble is an easier method than the other binary-to-decimal conversion
methods explained in this chapter? Why?

Yes. The double-dabble approach stands out from the other binary-to-decimal conversion methods because it never requires computing powers of two or summing large terms, as the first method, positional notation, does: you simply double a running total and add in the next bit.

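A sketch of the double-dabble idea as described above (the function name is illustrative):

```python
# "Double, then dabble in the next bit": no powers of two are ever computed.
def double_dabble(bit_string):
    total = 0
    for bit in bit_string:
        total = total * 2 + int(bit)   # double the total, add the next bit
    return total

print(double_dabble("11010"))   # → 26
```

Compare this with positional notation, which would require evaluating 2**4 + 2**3 + 2**1 for the same input.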
7. What is EBCDIC, and how is it related to BCD?

EBCDIC stands for Extended Binary Coded Decimal Interchange Code, an 8-bit code created by IBM that can represent 256 symbols. It is an extension of BCD (Binary Coded Decimal), a 4-bit code that can represent only the decimal digits, not letters or other characters.
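For context, this sketch shows plain BCD, the 4-bit-per-digit scheme that EBCDIC extends to a full 8-bit character set (the helper name is made up for illustration):

```python
# BCD encodes each decimal digit as its own 4-bit binary code.
def to_bcd(n):
    return " ".join(f"{int(d):04b}" for d in str(n))

print(to_bcd(49))   # → 0100 1001
```

Note that BCD has codes only for 0 through 9; EBCDIC's extra four bits per character are what make room for letters, punctuation, and control codes.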

8. What is ASCII and how did it originate?

ASCII stands for American Standard Code for Information Interchange, a standardized coding scheme that represents text in digital communication. It evolved from a 7-bit code promoted by Bell data services; work on it began in May 1961 within the X3 committee of the American Standards Association.

9. How many bits does a Unicode character require?

16 bits.

10. Why was Unicode created?

The Unicode standard was developed in response to the need for a single character encoding that could cover every writing system. Its purpose is to subsume all of the earlier, incompatible encoding schemes so that computers exchanging text no longer misinterpret one another's characters.

11. Why is Manchester coding not a good choice for writing data to a magnetic disk?

Manchester coding is a binary phase-shift keying (BPSK) technique in which the data control the phase of a square-wave carrier whose frequency matches the data rate. It guarantees frequent voltage transitions, proportional to the clock rate, which makes clock recovery easy; but that same property makes it a poor choice for magnetic disks, because each bit can require up to two flux transitions, and a disk surface can hold only a limited number of flux transitions per unit area, so Manchester coding roughly halves the usable storage density.

12. Explain how run-length-limited encoding works.

Run-length-limited (RLL) encoding constrains how long a run of identical bits (a stretch of constant signal) may be, placing both a minimum and a maximum on run length. If runs are too long, clock recovery becomes difficult; if they are too short, the channel may attenuate the resulting high frequencies.

13. How do cyclic redundancy checks work?

A cyclic redundancy check (CRC) is a type of error-detecting code widely used in data networks and storage devices to detect accidental changes to data. A short check value is appended to each data block: the sender treats the block as a polynomial, divides it by an agreed-upon generator polynomial using modulo-2 arithmetic, and appends the remainder. The receiver repeats the division; a nonzero remainder signals an error.
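The division step can be sketched over bit strings; the generator "1011" below is chosen only for illustration, and the function name is made up:

```python
# Modulo-2 (XOR) long division: append zeros for the check bits, then XOR
# the generator into the message wherever the leading bit is 1.
def crc_remainder(message, generator):
    pad = len(generator) - 1
    bits = list(message + "0" * pad)
    for i in range(len(message)):
        if bits[i] == "1":
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-pad:])

print(crc_remainder("1101011011", "1011"))   # → 100
```

The sender would transmit 1101011011 followed by 100; the receiver divides the whole codeword by the same generator and accepts it only if the remainder is zero.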

14. What is systematic error detection?

Systematic error detection is a scheme in which the original data bits appear unchanged in the transmitted codeword, with the error-checking bits simply appended to them. CRC is systematic in this sense: the message block is sent as-is, followed by the remainder that serves as the check value, so the receiver can read the data directly and verify it separately.

15. What is a Hamming code?

A Hamming code is an error-correcting code that adds redundant parity bits to a message; the number of parity bits depends on the number of information bits. For m information bits, the smallest number of parity bits r satisfying 2^r >= m + r + 1 is used. Each parity bit covers a specific subset of bit positions, so the pattern of failed parity checks identifies the position of a single-bit error. For example, to convey m = 4 information bits, r = 3 parity bits are required, since 2^3 = 8 >= 4 + 3 + 1, giving a 7-bit codeword.
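A sketch of the classic Hamming(7,4) case described above (function names are illustrative; parity bits sit at positions 1, 2, and 4):

```python
# Hamming(7,4): three parity bits protect four data bits (2**3 >= 4 + 3 + 1).
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                     # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                     # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                     # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_syndrome(code):
    c = [None] + code                     # 1-based indexing
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s3 = c[4] ^ c[5] ^ c[6] ^ c[7]
    return s3 * 4 + s2 * 2 + s1           # position of the flipped bit; 0 = none

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                              # inject a single-bit error at position 5
print(hamming74_syndrome(code))           # → 5
```

The syndrome is the binary number formed by the failed checks, and it equals the position of the corrupted bit, so flipping that bit back restores the codeword.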
