Popular Computer Coding Systems
INTRODUCTION
This assignment deals with computer coding systems. Many different coding systems are used to represent data today. Why are coding systems important? To represent numeric, alphabetic, and special characters in a computer's internal storage and on magnetic media, we must use some sort of coding system. In computers, a code is made up of fixed-size groups of binary positions. Each binary position in a group is assigned a specific value; for example 8, 4, 2, or 1. In this way, every character can be represented by a combination of bits that is different from any other combination. Here we will learn about the popular coding systems used to represent data: binary, octal, hexadecimal, BCD, the American Standard Code for Information Interchange (ASCII), Gray code, and the excess-3 code.
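The weighted binary positions mentioned above can be shown in a short sketch. This is an illustrative example only, assuming a 4-bit group whose positions carry the weights 8, 4, 2 and 1; the function name is hypothetical.

```python
def decode_8421(bits):
    """Decode a 4-bit group such as [1, 0, 0, 1] using 8-4-2-1 weights."""
    weights = [8, 4, 2, 1]
    # Each position contributes its weight when its bit is 1.
    return sum(w * b for w, b in zip(weights, bits))

print(decode_8421([1, 0, 0, 1]))  # 8 + 1 = 9
print(decode_8421([0, 1, 1, 0]))  # 4 + 2 = 6
```

Because every decimal digit gets a distinct bit combination, a group of positions can represent any character of a small alphabet unambiguously.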
BCD
Binary-coded decimal (BCD) encodes each decimal numeral separately, in one of two forms. Unpacked: each numeral is encoded into one byte, with four bits representing the numeral and the remaining bits having no significance. Packed: two numerals are encoded into a single byte, with one numeral in the least significant nibble (bits 0-3) and the other numeral in the most significant nibble (bits 4-7). Hence the numerical range for one unpacked BCD byte is zero through nine inclusive, whereas the range for one packed BCD byte is zero through ninety-nine inclusive.
To represent numbers larger than the range of a single byte, any number of contiguous bytes may be used. For example, the decimal number 12345 can be encoded in packed BCD as the three bytes 01 23 45. Note that the most significant nibble of the most significant byte is zero, implying that the number is in actuality 012345. Also note how packed BCD is more efficient in storage usage as compared to unpacked BCD; encoding the same number in unpacked format would consume 100 percent more storage. Shifting and masking operations are used to pack or unpack a packed BCD digit. Other logical operations are used to convert a numeral to its equivalent bit pattern, or to reverse the process.
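The shifting and masking operations described above can be sketched as follows. This is a minimal illustration, assuming big-endian packed BCD with two decimal digits per byte; the function names are hypothetical.

```python
def pack_bcd(digits):
    """Pack pairs of decimal digits into bytes, e.g. [1,2,3,4,5] -> 01 23 45."""
    if len(digits) % 2:                  # pad with a leading zero digit
        digits = [0] + list(digits)
    out = []
    for hi, lo in zip(digits[0::2], digits[1::2]):
        out.append((hi << 4) | lo)       # shift the high digit into bits 4-7
    return bytes(out)

def unpack_bcd(data):
    """Recover the decimal digits from packed BCD bytes."""
    digits = []
    for byte in data:
        digits.append(byte >> 4)         # shift out the most significant nibble
        digits.append(byte & 0x0F)       # mask out the least significant nibble
    return digits

print(pack_bcd([1, 2, 3, 4, 5]).hex())        # '012345'
print(unpack_bcd(bytes([0x01, 0x23, 0x45])))  # [0, 1, 2, 3, 4, 5]
```

The leading-zero padding reproduces the behaviour noted above: 12345 packs into three bytes whose most significant nibble is zero.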
HEXADECIMAL
In mathematics and computer science, hexadecimal (also base 16, or hex) is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0-9 to represent values zero to nine, and A, B, C, D, E, F (or alternatively a-f) to represent values ten to fifteen. For example, the hexadecimal number 2AF3 is equal, in decimal, to (2 × 16³) + (10 × 16²) + (15 × 16¹) + (3 × 16⁰), or 10995. Each hexadecimal digit represents four binary digits (bits), and the primary use of hexadecimal notation is a human-friendly representation of binary-coded values in computing and digital electronics. One hexadecimal digit represents a nibble, which is half of an octet (8 bits). For example, byte values can range from 0 to 255 (decimal), but may be more conveniently represented as two hexadecimal digits in the range 00 to FF. Hexadecimal is also commonly used to represent computer memory addresses.
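The positional expansion above can be carried out digit by digit. A minimal sketch, assuming digit values 0-15 for the symbols 0-9 and A-F; the function name is hypothetical.

```python
def hex_to_decimal(hex_str):
    """Expand a hex string positionally: each digit multiplies by a power of 16."""
    value = 0
    for ch in hex_str:
        # Accumulating (value * 16 + digit) is equivalent to summing
        # digit * 16**position over all positions.
        value = value * 16 + "0123456789ABCDEF".index(ch.upper())
    return value

print(hex_to_decimal("2AF3"))  # 10995
print(hex_to_decimal("FF"))    # 255, the largest one-byte value
```

Running the 2AF3 example reproduces the sum (2 × 16³) + (10 × 16²) + (15 × 16¹) + (3 × 16⁰) = 10995 worked out above.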
UNICODE
Unicode is a computing industry standard for the consistent encoding, representation and handling of text expressed in most of the world's writing systems. Developed in conjunction with the Universal Character Set standard and published in book form as The Unicode Standard, the latest version of Unicode consists of a repertoire of more than 110,000 characters covering 100 scripts, a set of code charts for visual reference, an encoding methodology and set of standard character encodings, an enumeration of character properties such as upper and lower case, a set of reference data computer files, and a number of related items, such as rules for normalization, decomposition, collation, rendering, and bidirectional display order (for the correct display of text containing both right-to-left scripts, such as Arabic and Hebrew, and left-to-right scripts). As of 2012, the most recent version is Unicode 6.1. Unicode's success at unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including XML, the Java programming language, the Microsoft .NET Framework, and modern operating systems.
Unicode can be implemented by different character encodings. The most commonly used encodings are UTF-8, UTF-16 and the now-obsolete UCS-2. UTF-8 uses one byte for any ASCII character, all of which have the same code values in both UTF-8 and ASCII encoding, and up to four bytes for other characters. UCS-2 uses a 16-bit code unit (two 8-bit bytes) for each character but cannot encode every character in the current Unicode standard. UTF-16 extends UCS-2, using two 16-bit code units (4 × 8 bits) to handle each of the additional characters. Unicode originated as a fixed-width, 16-bit worldwide character encoding, and it is maintained and promoted by the Unicode Consortium, a non-profit computer industry organization. Unicode is important because it can represent most of the world's languages.
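The differing encoding sizes described above can be observed directly. A minimal sketch, assuming Python's built-in `str.encode`; the sample characters are illustrative choices.

```python
samples = {
    "A": "ASCII letter",      # 1 byte in UTF-8
    "é": "accented letter",   # 2 bytes in UTF-8
    "€": "euro sign",         # 3 bytes in UTF-8
    "𝄞": "musical symbol",    # 4 bytes in UTF-8, two UTF-16 code units
}
for ch, desc in samples.items():
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-be")  # big-endian, no byte-order mark
    print(f"{desc}: U+{ord(ch):04X} -> {len(utf8)} UTF-8 byte(s), "
          f"{len(utf16) // 2} UTF-16 code unit(s)")
```

The last sample lies outside UCS-2's 16-bit range (its code point is above U+FFFF), so UTF-16 spends two 16-bit code units on it, exactly as described above.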
CONCLUSION
These coding systems help humans to communicate with and through computers, so the need for them is clear; in this age of computers, they are essential. Many more coding systems exist beyond those covered here, and the field keeps developing as needs grow. As computing students we must keep up with the changes in these systems. The coding systems covered here differ from one another, yet they also share some similarities. Knowing and understanding coding systems is both fun and essential.