Database Management System
A Database Management System (DBMS) is a set of computer programs that controls the
creation, maintenance, and use of a database. It allows organizations to place control of
database development in the hands of database administrators (DBAs) and other specialists. A
DBMS is a system software package that supports the use of an integrated collection of data
records and files known as a database. It allows different user application programs to easily
access the same database. DBMSs may use any of a variety of database models, such as the
network model or the relational model. In large systems, a DBMS allows users and other
software to store and retrieve data in a structured way. Instead of having to write computer
programs to extract information, users can ask simple questions in a query language. Thus,
many DBMS packages provide fourth-generation programming languages (4GLs) and other
application development features. A DBMS helps to specify the logical organization of a
database and to access and use the information within it. It provides facilities for controlling
data access, enforcing data integrity, managing concurrency, and restoring the database from
backups. A DBMS also provides the ability to logically present database information to users.
Overview
A DBMS is a set of software programs that controls the organization, storage, management, and
retrieval of data in a database. DBMSs are categorized according to their data structures or types.
The DBMS accepts requests for data from an application program and instructs the operating
system to transfer the appropriate data. The queries and responses must be submitted and
received according to a format that conforms to one or more applicable protocols. When a
DBMS is used, information systems can be changed much more easily as the organization's
information requirements change. New categories of data can be added to the database without
disruption to the existing system.
Database servers are computers that hold the actual databases and run only the DBMS and
related software. Database servers are usually multiprocessor computers, with generous memory
and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one
or more servers via a high-speed channel, are also used in large volume transaction processing
environments. DBMSs are found at the heart of most database applications. DBMSs may be built
around a custom multitasking kernel with built-in networking support, but modern DBMSs
typically rely on a standard operating system to provide these functions.
A DBMS includes four main parts: a modeling language, data structures, a database query
language, and a transaction mechanism.
Components of DBMS
DBMS Engine accepts logical requests from the various other DBMS subsystems, converts them
into physical equivalents, and actually accesses the database and data dictionary as they exist on
a storage device.
Data Definition Subsystem helps users create and maintain the data dictionary and define the
structure of the files in a database.
Data Manipulation Subsystem helps users add, change, and delete information in a database
and query it for valuable information. Software tools within the data manipulation subsystem
are most often the primary interface between users and the information contained in a database.
It allows users to specify their logical information requirements.
Application Generation Subsystem contains facilities to help users develop transaction-
intensive applications. Processing a transaction usually requires that the user perform a detailed
series of tasks. This subsystem facilitates easy-to-use data entry screens, programming
languages, and interfaces.
Data Administration Subsystem helps users manage the overall database environment by
providing facilities for backup and recovery, security management, query optimization,
concurrency control, and change management.
A data modeling language defines the schema of each database hosted in the DBMS, according
to the DBMS's database model. The four most common models are the:
hierarchical model,
network model,
relational model, and
object model.
Inverted lists and other methods are also used. A given database management system may
provide one or more of the four models. The optimal structure depends on the natural
organization of the application's data, and on the application's requirements (which include
transaction rate (speed), reliability, maintainability, scalability, and cost).
The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of
purists who believe this model is a corruption of the relational model, since it violates several of
its fundamental principles for the sake of practicality and performance. Many DBMSs also
support the Open Database Connectivity (ODBC) API, which provides a standard way for
programmers to access the DBMS.
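As a minimal sketch of this kind of standardized, programmatic access, the following uses
Python's built-in sqlite3 module as a stand-in for an ODBC-style driver (the table and data are
invented for illustration):

    import sqlite3

    # In-memory database; a real application would connect through an
    # ODBC/JDBC-style driver to a server DBMS instead.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
    conn.execute("INSERT INTO employee (name, dept) VALUES ('Ada', 'Payroll')")

    # The application states WHAT it wants; the DBMS decides HOW to fetch it.
    for row in conn.execute("SELECT name FROM employee WHERE dept = ?", ("Payroll",)):
        print(row[0])  # -> Ada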
Before the database management approach, organizations relied on file processing systems to
organize, store, and process data files. End users became frustrated with file processing because
data was stored in many different files, each organized in a different way. Each file was
specialized for use with a specific application. Needless to say, file processing was bulky,
costly, and inflexible when it came to supplying needed data accurately and promptly. Data
redundancy was an issue with the file processing system because the independent data files
produced duplicate data, so when updates were needed, each separate file had to be updated.
Another issue was the lack of data integration, since data was dependent on other data to
organize and store it. Lastly, there was no consistency or standardization of the data in a file
processing system, which made maintenance difficult. For all these reasons, the database
management approach was developed. Database management systems (DBMS) are designed to
use one of five database structures to provide simple access to information stored in
databases. The five database structures are the hierarchical, network, relational,
multidimensional, and object-oriented models.
The hierarchical structure was used in early mainframe DBMSs. Records' relationships form a
treelike model. This structure is simple but inflexible because the relationship is confined to a
one-to-many relationship. IBM's IMS system and the RDM Mobile are examples of hierarchical
database systems with multiple hierarchies over the same data. RDM Mobile is a newly
designed embedded database for a mobile computer system. The hierarchical structure is used
primarily today for storing geographic information and file systems.
The network structure consists of more complex relationships. Unlike the hierarchical structure,
it can relate many records and access them by following one of several paths. In other words,
this structure allows for many-to-many relationships.
The relational structure is the most commonly used today. It is used by mainframe, midrange and
microcomputer systems. It uses two-dimensional rows and columns to store data. The tables of
records can be connected by common key values. While working for IBM, E.F. Codd designed
this structure in 1970. The model is not easy for the end user to run queries with because it may
require a complex combination of many tables.
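A small illustrative sketch (table and column names invented) of how records in separate tables
are connected by a common key value, which is also why end-user queries can require
combining several tables:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders   (order_id INTEGER PRIMARY KEY,
                               customer_id INTEGER REFERENCES customer(customer_id),
                               total REAL);
        INSERT INTO customer VALUES (1, 'Acme Ltd');
        INSERT INTO orders   VALUES (10, 1, 99.50);
    """)

    # The two tables are related only through the shared customer_id key.
    query = """SELECT c.name, o.total
               FROM customer AS c JOIN orders AS o
                 ON c.customer_id = o.customer_id"""
    print(conn.execute(query).fetchall())  # -> [('Acme Ltd', 99.5)]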
The multidimensional structure is similar to the relational model. The dimensions of this cube-
like model hold data relating to elements in each cell. This structure gives a spreadsheet-like
view of data. It is easy to maintain because records are stored as fundamental attributes, the
same way they are viewed, and the structure is easy to understand. Its high performance has
made it the most popular database structure when it comes to enabling online analytical
processing (OLAP).
The object-oriented structure has the ability to handle data types such as graphics, pictures,
voice, and text without difficulty, unlike the other database structures. This structure is popular
for multimedia Web-based applications. It was designed to work with object-oriented
programming languages such as Java.
Data structure
Data structures (fields, records, files and objects) are optimized to deal with very large amounts
of data stored on a permanent data storage device (which implies relatively slow access
compared to volatile main memory).
Database query language
A database query language and report writer allows users to interactively interrogate the
database, analyze its data and update it according to the user's privileges on data. It also controls
the security of the database. Data security prevents unauthorized users from viewing or updating
the database. Using passwords, users are allowed access to the entire database or subsets of it
called subschemas. For example, an employee database can contain all the data about an
individual employee, but one group of users may be authorized to view only payroll data, while
others are allowed access to only work history and medical data.
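As an illustrative sketch of such a subschema, the following defines a view that exposes only the
payroll columns (SQLite and all names are chosen for brevity; a server DBMS would typically
pair such a view with GRANT statements):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT,
                               salary REAL, medical_notes TEXT);
        INSERT INTO employee VALUES (1, 'Ada', 5200.0, 'confidential');
        -- A view exposing only payroll columns; payroll staff query the
        -- view and never see the medical data.
        CREATE VIEW payroll_view AS SELECT id, name, salary FROM employee;
    """)
    print(conn.execute("SELECT * FROM payroll_view").fetchall())
    # -> [(1, 'Ada', 5200.0)]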
If the DBMS provides a way to interactively enter and update the database, as well as interrogate
it, this capability allows for managing personal databases. However, it may not leave an audit
trail of actions or provide the kinds of controls necessary in a multi-user organization. These
controls are only available when a set of application programs are customized for each data entry
and updating function.
Transaction mechanism
A database transaction mechanism ideally guarantees ACID properties in order to ensure data
integrity despite concurrent user accesses (concurrency control), and faults (fault tolerance). It
also maintains the integrity of the data in the database. The DBMS can maintain the integrity of
the database by not allowing more than one user to update the same record at the same time. The
DBMS can help prevent duplicate records via unique index constraints; for example, no two
customers with the same customer number (key field) can be entered into the database. See
ACID properties for more information.
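A minimal sketch of both mechanisms follows, assuming an invented customer table: the
duplicate key is rejected, and the surrounding transaction is rolled back as a single unit:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (customer_no INTEGER PRIMARY KEY, name TEXT)")

    try:
        with conn:  # one atomic transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO customer VALUES (100, 'First Co')")
            conn.execute("INSERT INTO customer VALUES (100, 'Duplicate Co')")  # same key
    except sqlite3.IntegrityError as err:
        print("rejected:", err)

    # The first insert was rolled back too, so no partial update survives.
    print(conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0])  # -> 0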
A DBMS minimizes these problems by providing three views of the database data: an external
view (or user view), a logical view (or conceptual view), and a physical (or internal) view. The
user's view of a database program represents data in a format that is meaningful to a user and to
the software programs that process those data.
One strength of a DBMS is that while there is typically only one conceptual (or logical) and one
physical (or internal) view of the data, there can be an endless number of different external
views. This feature allows users to see database information in a more business-related way
rather than from a technical, processing viewpoint. Thus the logical view refers to the way users
view data, and the physical view to the way the data are physically stored and processed.
Alternatively, and especially in connection with the relational model of database management,
the relation between attributes drawn from a specified set of domains can be seen as being
primary. For instance, the database might indicate that a car that was originally "red" might fade
to "pink" in time, provided it was of some particular "make" with an inferior paint job. Such
higher arity relationships provide information on all of the underlying domains at the same time,
with none of them being privileged above the others.
A database management system stores related data in an "efficient" and "compact" manner.
Efficient means that the stored data can be accessed very quickly, and compact means that the
stored data occupies very little space in the computer's memory. The phrase "related data"
means that the data stored in the DBMS concerns some particular topic.
Throughout recent history specialized databases have existed for scientific, geospatial, imaging,
document storage and like uses. Functionality drawn from such applications has lately begun
appearing in mainstream DBMSs as well. However, the main focus there, at least when aimed at
the commercial data processing market, is still on descriptive attributes on repetitive record
structures.
Thus, the DBMSs of today roll together frequently needed services or features of attribute
management. By externalizing such functionality to the DBMS, applications effectively share
code with each other and are relieved of much internal complexity. Features commonly offered
by database management systems include:
Query ability
Querying is the process of requesting attribute information from various perspectives and
combinations of factors. Example: "How many 2-door cars in Texas are green?" A database
query language and report writer allow users to interactively interrogate the database, analyze
its data and update it according to the user's privileges on data.
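That example question maps directly onto a declarative query; a sketch with an invented car
table:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE car (doors INTEGER, state TEXT, color TEXT)")
    conn.executemany("INSERT INTO car VALUES (?, ?, ?)",
                     [(2, 'Texas', 'green'), (4, 'Texas', 'green'), (2, 'Ohio', 'red')])

    # "How many 2-door cars in Texas are green?"
    count = conn.execute(
        "SELECT COUNT(*) FROM car WHERE doors = 2 AND state = 'Texas' AND color = 'green'"
    ).fetchone()[0]
    print(count)  # -> 1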
Backup and replication
Copies of attributes need to be made regularly in case primary disks or other equipment fail. A
periodic copy of attributes may also be created for a distant organization that cannot readily
access the original. DBMSs usually provide utilities to facilitate the process of extracting and
disseminating attribute sets. When data is replicated between database servers, so that the
information remains consistent throughout the database system and users cannot tell or even
know which server in the DBMS they are using, the system is said to exhibit replication
transparency.
Rule enforcement
Often one wants to apply rules to attributes so that the attributes are clean and reliable. For
example, we may have a rule that says each car can have only one engine associated with it
(identified by Engine Number). If somebody tries to associate a second engine with a given car,
we want the DBMS to deny such a request and display an error message. However, with
changes in the model specification such as, in this example, hybrid gas-electric cars, rules may
need to change. Ideally such rules should be able to be added and removed as needed without
significant data layout redesign.
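A sketch of the engine rule expressed as a declarative constraint (schema invented for
illustration): because the rule lives in the database, every application gets the same
enforcement, and relaxing it later is a schema change rather than a data redesign:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    # UNIQUE on car_id enforces "one engine per car" for every application.
    conn.execute("""CREATE TABLE engine (
                        engine_no TEXT PRIMARY KEY,
                        car_id    TEXT UNIQUE NOT NULL)""")
    conn.execute("INSERT INTO engine VALUES ('E-1', 'CAR-7')")
    try:
        conn.execute("INSERT INTO engine VALUES ('E-2', 'CAR-7')")  # second engine
    except sqlite3.IntegrityError:
        print("error: car CAR-7 already has an engine")
    # For hybrid gas-electric cars the rule could be relaxed by dropping the
    # UNIQUE constraint, without redesigning the stored data.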
Security
Often it is desirable to limit who can see or change which attributes or groups of attributes. This
may be managed directly by individual, or by the assignment of individuals and privileges to
groups, or (in the most elaborate models) through the assignment of individuals and groups to
roles which are then granted entitlements.
Computation
There are common computations requested on attributes such as counting, summing, averaging,
sorting, grouping, cross-referencing, etc. Rather than have each computer application
implement these from scratch, they can rely on the DBMS to supply such calculations.
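A short sketch of delegating such calculations to the DBMS rather than reimplementing them in
each application (sales table invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sale (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sale VALUES (?, ?)",
                     [('north', 10.0), ('north', 30.0), ('south', 5.0)])

    # Counting, summing, averaging and grouping are done by the engine.
    for row in conn.execute("""SELECT region, COUNT(*), SUM(amount), AVG(amount)
                               FROM sale GROUP BY region ORDER BY region"""):
        print(row)  # -> ('north', 2, 40.0, 20.0) then ('south', 1, 5.0, 5.0)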
Change and access logging
Often one wants to know who accessed what attributes, what was changed, and when it was
changed. Logging services allow this by keeping a record of access occurrences and changes.
Automated optimization
If there are frequently occurring usage patterns or requests, some DBMS can adjust themselves
to improve the speed of those interactions. In some cases the DBMS will merely provide tools to
monitor performance, allowing a human expert to make the necessary adjustments after
reviewing the statistics collected.
Metadata
Metadata is data describing data. For example, a listing that describes what attributes are allowed
to be in data sets is called "meta-information". Metadata is thus also known as data about data.
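In SQLite, for instance, the catalog itself can be queried, which makes the idea of data about
data concrete (table name invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")

    # The catalog describes what data sets exist and how they are defined.
    print(conn.execute("SELECT name, sql FROM sqlite_master").fetchall())
    # Column-level metadata: which attributes are allowed in the data set.
    print(conn.execute("PRAGMA table_info(employee)").fetchall())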
In 1998, database management was in need of a new style of database to solve existing database
management problems. Researchers realized that the old trends of database management were
becoming too complex, and that there was a need for automated configuration and management
[5]. Surajit Chaudhuri, Gerhard Weikum and Michael Stonebraker were pioneers who
dramatically affected thinking about database management systems [5]. They believed that
database management needed a more modular approach, since there are so many different
specification needs for various users [5]. Since this new development process of database
management, we currently have endless possibilities. Database management is no longer limited
to "monolithic entities" [5]. Many solutions have been developed to satisfy the individual needs
of users. The development of numerous database options has created flexible solutions in
database management.
Today there are several ways in which database management has affected the technology world
as we know it. Organizations' demand for directory services has become an extreme necessity
as organizations grow. Businesses are now able to use directory services that provide prompt
searches for their company information [5]. Mobile devices are not only able to store contact
information of users but have grown to bigger capabilities. Mobile technology is able to cache
large amounts of information that is used on computers and display it on smaller devices [5].
Web searches have also been affected by database management: search engine queries are able
to locate data within the World Wide Web [5]. Retailers have benefited from the developments
of data warehousing, recording the customer transactions made within their business [5]. Online
transactions have become tremendously popular with the e-business world. Consumers and
businesses are able to make payments securely on company websites. None of these current
developments would have been possible without the evolution of database management. Even
with all the progress and current trends of database management, there will always be a need for
new development as specifications and needs grow.
As the speed of consumer internet connectivity increases, and as data availability and computing
become more ubiquitous, databases are migrating to web services. Web-based technologies such
as XML and PHP are being used to process databases over web-based services. These
technologies allow databases to live in "the cloud." As with many other products, such as
Google's Gmail, Microsoft's Office 2010, and Carbonite's online backup services, many services
are moving to web-based delivery due to increasing internet reliability, data storage efficiency,
and the lack of a need for dedicated IT staff to manage the hardware. Faculty at the Rochester
Institute of Technology published a paper regarding the use of databases in the cloud and state
that their school plans to add cloud-based database computing to their curriculum to "keep
[their] information technology (IT) curriculum at the forefront of technology."[6]
Advanced DBMS: Distributed Database Management System (DDBMS)
A distributed database is a collection of data which belongs logically to the same system but is
spread over the sites of a computer network. The two aspects of a distributed database are:
1. Distribution
2. Logical correlation
Distribution: the fact that the data are not resident at the same site, which distinguishes a
distributed database from a single, centralized database.
Logical correlation: the fact that the data have some properties that tie them together, which
distinguishes a distributed database from a set of local databases or files resident at different
sites of a computer network.
Presentation program
A presentation program is a computer software package used to display information, normally
in the form of a slide show. It typically includes three major functions: an editor that allows text
to be inserted and formatted, a method for inserting and manipulating graphic images and a slide-
show system to display the content.
A presentation program is meant to help both the speaker, with easier access to his or her ideas,
and the participants, with visual information that complements the talk. There are many
different types of presentations, including professional (work-related), educational,
entertainment, and general communication. Presentation programs can either supplement or
replace the use of older visual aid technology, such as pamphlets, handouts, chalkboards, flip
charts, posters, slides and overhead transparencies. Text, graphics, movies, and other objects are
positioned on individual pages or "slides" or "foils". The "slide" analogy is a reference to the
slide projector, a device that has become somewhat obsolete due to the use of presentation
software. Slides can be printed, or (more usually) displayed on-screen and navigated through at
the command of the presenter. Transitions between slides can be animated in a variety of ways,
as can the emergence of elements on a slide itself. Typically a presentation has many
constraints, the most important being the limited time in which to present consistent information.
Recently a new presentation paradigm has emerged: zooming presentation programs (e.g.
AHEAD and Prezi). Instead of individual slides, these ZUIs (zoom user interfaces) are based on
one infinite canvas on which all content is presented. This allows for non-linear presentations,
the option to present richer detail of content, and a better overview and understanding of
complex visual messages and relations.
Many presentation programs come with pre-designed images (clip art) and/or have the ability to
import graphic images. Custom graphics can also be created in other programs such as Adobe
Photoshop or Adobe Illustrator and then exported. The concept of clip art originated with the
image library that came as a complement with VCN ExecuVision, beginning in 1983.
With the growth of digital photography and video, many programs that handle these types of
media also include presentation functions for displaying them in a similar "slide show" format.
For example, Apple's iPhoto allows groups of digital photos to be displayed in a slide show with
options such as selecting transitions, choosing whether or not the show stops at the end or
continues to loop, and including music to accompany the photos.
Similar to programming extensions for an operating system or web browser, "add ons" or plugins
for presentation programs can be used to enhance their capabilities. For example, it would be
useful to export a PowerPoint presentation as a Flash animation or PDF document. This would
make delivery through removable media or sharing over the Internet easier. Since PDF files are
designed to be shared regardless of platform and most web browsers already have the plugin to
view Flash files, these formats would allow presentations to be more widely accessible.
Certain presentation programs also offer an interactive integrated hardware element designed to
engage an audience (e.g. audience response systems) or facilitate presentations across different
geographical locations (e.g. web conferencing). Other integrated hardware devices ease the job
of a live presenter such as laser pointers and interactive whiteboards.
Adobe Persuasion
AppleWorks
Authorstream
Beamer (LaTeX)
BRUNO Hewlett Packard
CA-Cricket Presents
CustomShow
Digitalsoft Keypoint
Google Docs
Harvard Graphics
HyperCard
IBM Lotus Freelance Graphics
IBM Lotus Symphony
Keynote
KPresenter
Macromedia Director
MagicPoint
Microsoft PowerPoint
NeoOffice Impress
Openlp.org
OpenMind
OpenOffice.org Impress
PicturesToExe
Photo slideshow software
Powerdot
Prezi
S5 (file format)
Sales Graphics
Scala Multimedia
Screencast
Slide Effect
SlideRocket
SoftMaker Presentations
SongPro
SpicyNodes
Tech Talk PSE
VCN ExecuVision
VUE (Visual Understanding Environment)
Web based presentation tools
Worship presentation program
Zoho
Transmission Control Protocol (TCP)
Due to network congestion, traffic load balancing, or other unpredictable network behavior, IP
packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests
retransmission of lost packets, rearranges out-of-order packets, and even helps minimize network
congestion to reduce the occurrence of the other problems. Once the TCP receiver has finally
reassembled a perfect copy of the data originally transmitted, it passes that datagram to the
application program. Thus, TCP abstracts the application's communication from the underlying
networking details.
TCP is used extensively by many of the Internet's most popular applications, including the World
Wide Web (WWW), E-mail, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and
some streaming media applications.
TCP is optimized for accurate delivery rather than timely delivery, and therefore, TCP
sometimes incurs relatively long delays (in the order of seconds) while waiting for out-of-order
messages or retransmissions of lost messages. It is not particularly suitable for real-time
applications such as Voice over IP. For such applications, protocols like the Real-time Transport
Protocol (RTP) running over the User Datagram Protocol (UDP) are usually recommended
instead.[2]
TCP is a reliable stream delivery service that guarantees delivery of a data stream sent from one
host to another without duplication or losing data. Since packet transfer is not reliable, a
technique known as positive acknowledgment with retransmission is used to guarantee reliability
of packet transfers. This fundamental technique requires the receiver to respond with an
acknowledgment message as it receives the data. The sender keeps a record of each packet it
sends, and waits for acknowledgment before sending the next packet. The sender also keeps a
timer from when the packet was sent, and retransmits a packet if the timer expires. The timer is
needed in case a packet gets lost or corrupted.[2]
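A toy stop-and-wait sketch of positive acknowledgment with retransmission over UDP follows
(the addresses, timeout, and retry count are arbitrary choices; real TCP uses byte sequence
numbers, sliding windows, and adaptive timers):

    import socket

    def send_reliably(data: bytes, dest=("127.0.0.1", 9999), timeout=1.0, retries=5):
        """Positive acknowledgment with retransmission, one packet at a time."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)          # timer in case the packet is lost
        for attempt in range(retries):
            sock.sendto(data, dest)       # send, keeping a record of the packet
            try:
                ack, _ = sock.recvfrom(16)
                if ack == b"ACK":         # receiver confirmed delivery
                    return True
            except socket.timeout:
                pass                      # timer expired: retransmit
        return False

    # A matching receiver would reply b"ACK" for each datagram it receives.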
TCP consists of a set of rules for the protocol, used together with the Internet Protocol, to send
data in the form of message units between computers over the Internet. While IP takes care of
handling the actual delivery of the data, TCP takes care of keeping track of the individual units
of data transmission, called segments, that a message is divided into for efficient routing
through the network. For example, when an HTML file is sent
from a Web server, the TCP software layer of that server divides the sequence of bytes of the file
into segments and forwards them individually to the IP software layer (Internet Layer). The
Internet Layer encapsulates each TCP segment into an IP packet by adding a header that includes
(among other data) the destination IP address. Even though every packet has the same
destination address, they can be routed on different paths through the network. When the client
program on the destination computer receives them, the TCP layer (Transport Layer)
reassembles the individual segments and ensures they are correctly ordered and error free as it
streams them to an application.
A TCP segment consists of a segment header and a data section. The TCP header contains 10
mandatory fields and an optional extension field (Options).
The data section follows the header. Its contents are the payload data carried for the application.
The length of the data section is not specified in the TCP segment header. It can be calculated by
subtracting the combined length of the TCP header and the encapsulating IP header
from the total IP datagram length (specified in the IP header).
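That calculation, written out as a small sketch (the 20-byte figures are the minimum, option-free
header sizes):

    def tcp_payload_length(total_ip_length: int,
                           ip_header_length: int = 20,
                           tcp_header_length: int = 20) -> int:
        # Data section = total IP datagram length minus both encapsulating headers.
        return total_ip_length - ip_header_length - tcp_header_length

    print(tcp_payload_length(1500))  # -> 1460, the classic Ethernet TCP payload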
Internet Protocol
The Internet Protocol (IP) is a protocol used for communicating data across a packet-switched
internetwork using the Internet Protocol Suite, also referred to as TCP/IP.
IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of
delivering distinguished protocol datagrams (packets) from the source host to the destination host
solely based on their addresses. For this purpose the Internet Protocol defines addressing
methods and structures for datagram encapsulation. The first major version of addressing
structure, now referred to as Internet Protocol Version 4 (IPv4) is still the dominant protocol of
the Internet, although the successor, Internet Protocol Version 6 (IPv6) is being deployed
actively worldwide.
Services provided by IP
The Internet Protocol is responsible for addressing hosts and routing datagrams (packets) from a
source host to the destination host across one or more IP networks. For this purpose the Internet
Protocol defines an addressing system that has two functions. Addresses identify hosts and
provide a logical location service. Each packet is tagged with a header that contains the meta-
data for the purpose of delivery. This process of tagging is also called encapsulation.
IP is a connectionless protocol and does not need circuit setup prior to transmission.
Reliability
The design principles of the Internet protocols assume that the network infrastructure is
inherently unreliable at any single network element or transmission medium and that it is
dynamic in terms of availability of links and nodes. No central monitoring or performance
measurement facility exists that tracks or maintains the state of the network. For the benefit of
reducing network complexity, the intelligence in the network is purposely mostly located in the
end nodes of each data transmission, cf. end-to-end principle. Routers in the transmission path
simply forward packets to the next known local gateway matching the routing prefix for the
destination address.
As a consequence of this design, the Internet Protocol only provides best effort delivery and its
service can also be characterized as unreliable. In network architectural language it is a
connection-less protocol, in contrast to so-called connection-oriented modes of transmission. The
lack of reliability allows any of the following fault events to occur:
data corruption
lost data packets
duplicate arrival
out-of-order packet delivery; meaning, if packet 'A' is sent before packet 'B', packet 'B' may
arrive before packet 'A'. Since routing is dynamic and there is no memory in the network about
the path of prior packets, it is possible that the first packet sent takes a longer path to its
destination.
The only assistance that the Internet Protocol provides in Version 4 (IPv4) is to ensure that the IP
packet header is error-free through computation of a checksum at the routing nodes. This has the
side-effect of discarding packets with bad headers on the spot. In this case no notification is
required to be sent to either end node, although a facility exists in the Internet Control Message
Protocol (ICMP) to do so.
IPv6, on the other hand, has abandoned the use of IP header checksums for the benefit of rapid
forwarding through routing elements in the network.
The resolution or correction of any of these reliability issues is the responsibility of an upper
layer protocol. For example, to ensure in-order delivery the upper layer may have to cache data
until it can be passed to the application.
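A sketch of that caching idea: packets that arrive early are held in a buffer until the gap is
filled (sequence numbers here count whole packets for simplicity, whereas TCP numbers bytes):

    def deliver_in_order(packets, deliver):
        """packets: iterable of (seq, data) pairs, possibly out of order."""
        buffer = {}        # cache for packets that arrived early
        expected = 0
        for seq, data in packets:
            buffer[seq] = data
            while expected in buffer:      # drain every contiguous packet
                deliver(buffer.pop(expected))
                expected += 1

    deliver_in_order([(1, "B"), (0, "A"), (2, "C")], print)  # prints A, B, C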
In addition to issues of reliability, this dynamic nature and the diversity of the Internet and its
components provide no guarantee that any particular path is actually capable of, or suitable for,
performing the data transmission requested, even if the path is available and reliable. One of the
technical constraints is the size of data packets allowed on a given link. An application must
assure that it uses proper transmission characteristics. Some of this responsibility lies also in the
upper layer protocols between application and IP. Facilities exist to examine the maximum
transmission unit (MTU) size of the local link, as well as for the entire projected path to the
destination when using IPv6. The IPv4 internetworking layer has the capability to automatically
fragment the original datagram into smaller units for transmission. In this case, IP does provide
re-ordering of fragments delivered out-of-order.[1]
Transmission Control Protocol (TCP) is an example of a protocol that will adjust its segment size
to be smaller than the MTU. User Datagram Protocol (UDP) and Internet Control Message
Protocol (ICMP) disregard MTU size thereby forcing IP to fragment oversized datagrams.[2]
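A sketch of the IPv4 fragmentation arithmetic (a 20-byte, option-free header is assumed; every
fragment payload except the last must be a multiple of 8 bytes):

    def fragment_sizes(payload_len: int, mtu: int = 1500, ip_header: int = 20):
        per_fragment = (mtu - ip_header) // 8 * 8  # largest multiple of 8 that fits
        sizes = []
        while payload_len > per_fragment:
            sizes.append(per_fragment)
            payload_len -= per_fragment
        sizes.append(payload_len)
        return sizes

    print(fragment_sizes(4000))  # -> [1480, 1480, 1040]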
Perhaps the most complex aspects of IP are IP addressing and routing. Addressing refers to how
end hosts become assigned IP addresses and how subnetworks of IP host addresses are divided
and grouped together. IP routing is performed by all hosts, but most importantly by internetwork
routers, which typically use either interior gateway protocols (IGPs) or external gateway
protocols (EGPs) to help make IP datagram forwarding decisions across IP connected networks.
Computer crime
On the global level, both governments and non-state actors continue to grow in importance, with
the ability to engage in such activities as espionage, financial theft, and other cross-border crimes
sometimes referred to as cyber warfare. The international legal system is attempting to hold
actors accountable for their actions, with the International Criminal Court among the few
addressing this threat.[3]
Topology
Computer crime encompasses a broad range of potentially illegal activities. Generally, however,
it may be divided into two categories: (1) crimes that target computer networks or devices
directly; and (2) crimes facilitated by computer networks or devices, the primary target of
which is independent of the computer network or device.
Examples of crimes that primarily target computer networks or devices would include:
Computer viruses
Denial-of-service attacks
Malware (malicious code)
Examples of crimes that merely use computer networks or devices would include:
Cyber stalking
Fraud and identity theft
Information warfare
Phishing scams
A computer can be a source of evidence. Even though the computer is not directly used for
criminal purposes, it is an excellent device for record keeping, particularly given the power to
encrypt the data. If this evidence can be obtained and decrypted, it can be of great value to
criminal investigators.
Spam
Spam, or the unsolicited sending of bulk email for commercial purposes, is unlawful to varying
degrees. As applied to email, specific anti-spam laws are relatively new; however, limits on
unsolicited electronic communications have existed in some form for some time.[4]
Fraud
Computer fraud is any dishonest misrepresentation of fact intended to induce another to do or
refrain from doing something that causes loss. In this context, the fraud will result in obtaining
a benefit by:
Altering computer input in an unauthorized way. This requires little technical expertise and is
not an uncommon form of theft by employees altering the data before entry or entering false
data, or by entering unauthorized instructions or using unauthorized processes;
Altering, destroying, suppressing, or stealing output, usually to conceal unauthorized
transactions: this is difficult to detect;
Altering or deleting stored data;
Altering or misusing existing system tools or software packages, or altering or writing code for
fraudulent purposes.
Other forms of fraud may be facilitated using computer systems, including bank fraud, identity
theft, extortion, and theft of classified information.
Obscene or offensive content
The content of websites and other electronic communications may be distasteful, obscene or
offensive for a variety of reasons. In some instances these communications may be illegal.
Many jurisdictions place limits on certain speech and ban racist, blasphemous, politically
subversive, libelous or slanderous, seditious, or inflammatory material that tends to incite hate
crimes.
The extent to which these communications are unlawful varies greatly between countries, and
even within nations. It is a sensitive area in which the courts can become involved in arbitrating
between groups with entrenched beliefs.
One area of Internet pornography that has been the target of the strongest efforts at curtailment is
child pornography.
Harassment
Whereas content may be offensive in a non-specific way, harassment directs obscenities and
derogatory comments at specific individuals focusing for example on gender, race, religion,
nationality, sexual orientation. This often occurs in chat rooms, through newsgroups, and by
sending hate e-mail to interested parties (see cyber bullying, cyber stalking, harassment by
computer, hate crime, Online predator, and stalking). Any comment that may be found
derogatory or offensive is considered harassment.
Drug trafficking
Drug traffickers are increasingly taking advantage of the Internet to sell their illegal substances
through encrypted e-mail and other Internet technology. Some drug traffickers arrange deals at
internet cafes, use courier Web sites to track illegal packages of pills, and swap recipes for
amphetamines in restricted-access chat rooms.
The rise in Internet drug trades could also be attributed to the lack of face-to-face
communication. These virtual exchanges allow more intimidated individuals to purchase illegal
drugs more comfortably. The sketchy elements often associated with drug trades are severely
minimized, and the filtering process that comes with physical interaction fades away.
Furthermore, traditional drug recipes were carefully kept secrets, but with modern computer
technology this information is now available to anyone with computer access.
Cyberterrorism
Cyberterrorism, in general, can be defined as an act of terrorism committed through the use of
cyberspace or computer resources (Parker 1983). As such, simple propaganda on the Internet
that there will be bomb attacks during the holidays can be considered cyberterrorism. There are
also hacking activities directed towards individuals and families, organized by groups within
networks, tending to cause fear among people, demonstrate power, collect information relevant
for ruining people's lives, or carry out robberies, blackmail, etc.
Cyber warfare
Main article: Cyber warfare
The U.S. Department of Defense (DoD) notes that cyberspace has emerged as a national-level
concern through several recent events of geo-strategic significance, including the attack on
Estonia's infrastructure in 2007, allegedly by Russian hackers. In August 2008, Russia again
allegedly conducted cyber attacks, this time in a coordinated and synchronized kinetic and
non-kinetic campaign against the country of Georgia. Fearing that such attacks may become the
norm in future warfare among nation-states, the concept of cyberspace operations will be
adapted by warfighting military commanders in the future.[5]
Cyberlaw
Cyberlaw is a term that encapsulates the legal issues related to use of communicative,
transactional, and distributive aspects of networked information devices and technologies. It is
less a distinct field of law than property or contract law, as it is a domain covering many areas of
law and regulation. Some leading topics include intellectual property, privacy, freedom of
expression, and jurisdiction.
Hacker (computer security)
In common usage, a hacker is a person who breaks into computers and computer networks,
either for profit or motivated by the challenge.[1] The subculture that has evolved around hackers
is often referred to as the computer underground but is now an open community.[2]
Other uses of the word hacker exist that are not related to computer security (computer
programmer and home computer hobbyists), but these are rarely used by the mainstream media
because of the common stereotype that is in TV and movies. Before the media described the
person who breaks into computers as a hacker there was a hacker community. This group was a
community of people who had a large interest in computer programming, often sharing, without
restrictions, the source code for the software they wrote. These people now refer to the cyber-
criminal hackers as "crackers"[3].
Social engineering
Main article: Social engineering (computer security)
Social engineering is the art of getting people to reveal sensitive information about a system.
This is usually done by impersonating someone or by convincing people that you have
permission to obtain such information.
Social engineering is the act of manipulating people into performing actions or divulging
confidential information, rather than by breaking in or using technical cracking techniques;
essentially a fancier, more technical way of lying.[1] While similar to a confidence trick or simple
fraud, the term typically applies to trickery or deception for the purpose of information gathering,
fraud, or computer system access; in most cases the attacker never comes face-to-face with the
victim.
All social engineering techniques are based on specific attributes of human decision-making
known as cognitive biases.[2] These biases, sometimes called "bugs in the human hardware," are
exploited in various combinations to create attack techniques.
IP address
An Internet Protocol address (IP address) is a numerical label that is assigned to any device
participating in a computer network that uses the Internet Protocol for communication between
its nodes.[1] An IP address serves two principal functions: host or network interface identification
and location addressing. Its role has been characterized as follows: "A name indicates what we
seek. An address indicates where it is. A route indicates how to get there."[2]
The designers of TCP/IP defined an IP address as a 32-bit number[1] and this system, known as
Internet Protocol Version 4 (IPv4), is still in use today. However, due to the enormous growth of
the Internet and the predicted depletion of available addresses, a new addressing system (IPv6),
using 128 bits for the address, was developed in 1995[3], standardized by RFC 2460 in 1998,[4]
and is in world-wide production deployment.
Although IP addresses are stored as binary numbers, they are usually displayed in human-
readable notations, such as 208.77.188.166 (for IPv4), and 2001:db8:0:1234:0:567:1:1 (for
IPv6).
The Internet Protocol is used to route data packets between networks; IP addresses specify the
locations of the source and destination nodes in the topology of the routing system. For this
purpose, some of the bits in an IP address are used to designate a subnetwork. The number of
these bits is indicated in CIDR notation, appended to the IP address; e.g., 208.77.188.166/24.
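Python's standard ipaddress module can illustrate the notation, using the address above:

    import ipaddress

    # /24 means the first 24 bits designate the (sub)network.
    net = ipaddress.ip_network("208.77.188.166/24", strict=False)
    print(net)                  # 208.77.188.0/24
    print(net.netmask)          # 255.255.255.0
    print(net.num_addresses)    # 256
    print(ipaddress.ip_address("208.77.188.166") in net)  # True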
As the development of private networks raised the threat of IPv4 address exhaustion, RFC 1918
set aside a group of private address spaces that may be used by anyone on private networks. Such
networks require network address translator gateways to connect to the global Internet.
The Internet Assigned Numbers Authority (IANA) manages the IP address space allocations
globally and cooperates with five regional Internet registries (RIRs) to allocate IP address blocks
to local Internet registries (Internet service providers) and other entities.
Uniform Resource Locator (URL)
The Uniform Resource Locator was created in 1994[2] by Tim Berners-Lee, Marc Andreessen,
Mark P. McCahill, Alan Emtage, Peter J. Deutsch and Jon Postel, as part of the URI.[3] Berners-
Lee regrets the use of dots to separate the route to the server in the URI, and wishes he had used
slashes throughout.[4] For example, http://www.serverroute.com/path/to/file.html would
look like http:com/serverroute/www/path/to/file.html. Berners-Lee has also admitted
that the two forward slashes before the server route were unnecessary.[5]
Internet
[Image: visualization of the various routes through a portion of the Internet, from The Opte Project.]
The Internet is a global system of interconnected computer networks that use the standard
Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks
that consists of millions of private, public, academic, business, and government networks, of
local to global scope, that are linked by a broad array of electronic and optical networking
technologies. The Internet carries a vast range of information resources and services, such as the
inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to
support electronic mail.
Most traditional communications media including telephone, music, film, and television are
being reshaped or redefined by the Internet. Newspaper, book and other print publishing are
having to adapt to Web sites and blogging. The Internet has enabled or accelerated new forms of
human interactions through instant messaging, Internet forums, and social networking. Online
shopping has boomed both for major retail outlets and small artisans and traders. Business-to-
business and financial services on the Internet affect supply chains across entire industries.
The origins of the Internet reach back to the 1960s with both private and United States military
research into robust, fault-tolerant, and distributed computer networks. The funding of a new
U.S. backbone by the National Science Foundation, as well as private funding for other
commercial backbones, led to worldwide participation in the development of new networking
technologies, and the merger of many networks. The commercialization of what was by then an
international network in the mid 1990s resulted in its popularization and incorporation into
virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's
population used the services of the Internet.
The Internet has no centralized governance in either technological implementation or policies for
access and usage; each constituent network sets its own standards. Only the overarching
definitions of the two principal name spaces in the Internet, the Internet Protocol address space
and the Domain Name System, are directed by a maintainer organization, the Internet
Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and
standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering
Task Force (IETF), a non-profit organization of loosely affiliated international participants that
anyone may associate with by contributing technical expertise.
E-mail attachment
An e-mail attachment (or email attachment) is a computer file sent along with an e-mail message. One
or more files can be attached to any email message, and be sent along with it to the recipient. This is
typically used as a simple method to share documents and images.
Current usage
Size limits
E-mail standards such as MIME do not specify any file size limits, but in practice e-mail users
will find that they cannot send very large files.
Over the Internet, a message will often pass through several mail transfer agents to reach the
recipient. Each of these has to store the message before forwarding it on, and may therefore need
to impose size limits. The result is that while large attachments may succeed internally within a
company or organization, they are unreliable when sent across the Internet, and for that reason
sending systems often arbitrarily limit the size their users are allowed to submit[1]. As an
example, when Google's Gmail service increased its arbitrary limit to 20MB, it warned that
"...you may not be able to send larger attachments to contacts who use other email services with
smaller attachment limits...."[2][3]
Email users can be puzzled by these limits because the MIME encoding adds up to 30%
overhead[4], so that a 20MB document on disk can exceed a 25MB file attachment limit.
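The arithmetic behind that surprise can be sketched with Python's base64 module (MIME
attachments are typically base64-encoded, which turns every 3 bytes into 4, plus line breaks):

    import base64

    document = b"x" * 20 * 1024 * 1024          # a 20 MB file on disk
    encoded = base64.encodebytes(document)      # what actually travels in the e-mail

    print(len(encoded) / len(document))         # ~1.35: roughly a third bigger
    print(len(encoded) > 25 * 1024 * 1024)      # True: it exceeds a 25 MB limit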
"The Web" redirects here. For other uses, see Web (disambiguation).
Company CERN
Available? Worldwide
The World Wide Web, abbreviated as WWW and commonly known as the Web, is a system of
interlinked hypertext documents accessed via the Internet. With a web browser, one can view
web pages that may contain text, images, videos, and other multimedia and navigate between
them by using hyperlinks. Using concepts from earlier hypertext systems, English engineer and
computer scientist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium,
wrote a proposal in March 1989 for what would eventually become the World Wide Web.[1] At
CERN in Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau
proposed in 1990 to use "HyperText [...] to link and access information of various kinds as a web
of nodes in which the user can browse at will",[2] and publicly introduced the project in
December.[3]
"The World-Wide Web (W3) was developed to be a pool of human knowledge, and human
culture, which would allow collaborators in remote sites to share their ideas and all aspects of a
common project." [4]
Blog
A blog (a blend of the term web log)[1] is a type of website or part of a website. Blogs are usually
maintained by an individual with regular entries of commentary, descriptions of events, or other
material such as graphics or video. Entries are commonly displayed in reverse-chronological
order. Blog can also be used as a verb, meaning to maintain or add content to a blog.
Most blogs are interactive, allowing visitors to leave comments and even message each other via
widgets on the blogs and it is this interactivity that distinguishes them from other static websites.
[2]
Many blogs provide commentary or news on a particular subject; others function as more
personal online diaries. A typical blog combines text, images, and links to other blogs, Web
pages, and other media related to its topic. The ability of readers to leave comments in an
interactive format is an important part of many blogs. Most blogs are primarily textual, although
some focus on art (Art blog), photographs (photoblog), videos (video blogging), music (MP3
blog), and audio (podcasting). Microblogging is another type of blogging, featuring very short
posts.
As of December 2007, blog search engine Technorati was tracking more than 112,000,000 blogs.
[3]
Online chat
Online chat can refer to any kind of communication over the Internet, but is primarily meant to
refer to direct one-on-one chat or text-based group chat (formally also known as synchronous
conferencing), using tools such as instant messengers, Internet Relay Chat, talkers and possibly
MUDs. The expression online chat comes from the word chat which means "informal
conversation".[1]
Ethernet
Ethernet is a family of frame-based computer networking technologies for local area networks
(LANs). The name came from the physical concept of the ether. It defines a number of wiring
and signaling standards for the Physical Layer of the OSI networking model as well as a
common addressing format and Media Access Control at the Data Link Layer.
Ethernet is standardized as IEEE 802.3. The combination of the twisted pair versions of Ethernet
for connecting end systems to the network, along with the fiber optic versions for site backbones,
is the most widespread wired LAN technology. It has been used from around 1980[1] to the
present, largely replacing competing LAN standards such as token ring, FDDI, and ARCNET.
View
In database management systems, a view is a particular way of looking at a database. A single database
can support numerous different views. Typically, a view arranges the records in some order and makes
only certain fields visible. Note that different views do not affect the physical organization of the
database.
Word Processing
Microsoft Works 4.0 (Macintosh)