
LEARN CODING

2 Books in 1:
A Practical Guide to Learn Python
and SQL. Discover the Secrets of
Programming and Avoid Common
Mistakes. Exercises Included

Jason Crash
Easy Navigator

Learn Python Programming


SQL

© Copyright 2020 – Jason Crash - All rights reserved.


The content contained within this book may not be reproduced, duplicated
or transmitted without direct written permission from the author or the
publisher.
Under no circumstances will any blame or legal responsibility be held against the publisher or author for any damages, reparation, or monetary loss due to the information contained within this book, either directly or indirectly.
Legal Notice:
This book is copyright protected. This book is only for personal use. You
cannot amend, distribute, sell, use, quote or paraphrase any part, or the
content within this book, without the consent of the author or publisher.
Disclaimer Notice:
Please note the information contained within this document is for
educational and entertainment purposes only. Every effort has been made to present accurate, up-to-date, reliable, and complete information. No
warranties of any kind are declared or implied. Readers acknowledge that
the author is not engaging in the rendering of legal, financial, medical or
professional advice. The content within this book has been derived from
various sources. Please consult a licensed professional before attempting
any techniques outlined in this book.
By reading this document, the reader agrees that under no circumstances is
the author responsible for any losses, direct or indirect, which are incurred
as a result of the use of information contained within this document,
including, but not limited to, errors, omissions, or inaccuracies.

Learn Python Programming

A Practical Introduction Guide for Python


Programming. Learn Coding Faster with
Hands-On Project. Crash Course
Table of Contents
Introduction
Chapter 1: Python
Why Should You Learn Computer Programming?
Advantages of Modern Programming
Why Did Python Emerge as a Winner Among a Lot of Other Computer
Languages?
What Is Python (and a Little Bit of History)
Chapter 2: Importance of Python
What Can You Do as a Python Programmer?
Example Program to Just Get You a Good Overview of the Python
Programming
Python Reserved Words
Chapter 3: How to Install Python
Installation and Running of Python
Official Version Installation
Other Python Versions
Virtualenv
Chapter 4: The World of Variables
Chapter 5: Data Types in Python
Basic Data Types
Tuples and Lists
Dictionaries
Chapter 6: Operators in Python
Mathematical Operators
String
Comparison Operator
Logical Operators
Operator Precedence
Chapter 7: Execution and Repetitive Tasks
If Structure
Stand Back
If Nesting and Elif
For Loop
For Element in Sequence
While Loop
Skip or Abort
Small Exercise to Review What We Learned Until Now
Chapter 8: Functions and Modules
What Are Functions?
Defining Functions
How to Call Functions?
Function Documentation
Parameter Passing
Basic Pass Parameters
Pass the Parcel
Unwrap
Recursion
GAUSS and Mathematical Induction
Proof of the Proposition
Function Stack
Scope of Variables
inner_var()
Introducing Modules
Search Path
Installation of Third-Party Modules
Chapter 9: Reading and Writing Files in Python
Storage
Documents
Context Manager
Pickle Pack
Chapter 10: Object-Oriented Programming Part 1
Classes
Objects
Successors
Subclasses
Attribute Overlay
What You Missed Out on All Those Years
List Objects
Tuples and String Objects
Chapter 11: Object-Oriented Programming Part 2
Operators
Element References
Just a Small Example for Dictionary Datatype
Implementation of Built-In Functions
Attribute Management
Features
__getattr__() Method
Dynamic Type
Mutable and Immutable Objects
Look at the Function Parameter Passing from the Dynamic Type
Memory Management in Python
1. Reference Management
2. Garbage Collection
Chapter 12: Exception Handling
What Is a Bug?
Debugging
Exception Handling in Detail
Chapter 13: Python Web Programming
HTTP Communication Protocol
http.client Package
Conclusion
Introduction
Congratulations on purchasing Learn Python Programming, and thank you
for doing so!
The following chapters will discuss Python programming in detail, with well-chosen examples that will help you get a better understanding of different programming concepts. You've taken the
first step to learning a programming language that is famous for its
robustness and simplicity.
Taking Python as an example, this book not only introduces the basic
concepts of programming but also focuses on programming paradigms (procedural, object-oriented, and functional), as well as how those paradigms appear in Python. This way, the reader not
only learns Python but also will have an easier time learning about other
programming languages in the future.
Computer hardware performance has developed by leaps and bounds. At the same time, programming languages have also undergone several changes, resulting in a variety of programming paradigms. Python, with its simplicity and flexibility, has made its way into the software industry despite competition from many other programming languages. Throughout its history, we can see not only the features of Python but also the problems the language is meant to address.
Computing has a long history dating back thousands of years. People can calculate and remember, but what is even more remarkable is their ability to use tools. Humans have long used methods and tools to aid highly complex cognitive tasks such as computation and memory. Our ancestors tied knots in ropes to record the cattle and sheep they kept, and later learned to work the abacus at dizzying speeds. With the development of
modern industrialization, the social demand for computation is more and
more intense. Taxes need to be calculated, machines need to be built, and
canals need to be dug. New computing tools are emerging. Using the
principle of a logarithm, people made a slide rule. The slide rule can be
moved in parallel to calculate multiplication and division. Charles Babbage,
a 19th Century Englishman, designed a machine that used a combination of
gears to make highly accurate calculations, hinting at the arrival of machine
computing. At the beginning of the 20th century came electromechanical computing machines: an electric motor drove the gears, which squeaked away until the calculation was done.

During World War II, the war stimulated the need for computing in society.
Weapon design requires calculations, such as the shape of a tank, the outer hull of a submarine, or the trajectory of a projectile. The militarization of society requires calculations, such as train scheduling, resource allocation, and population mobilization. As for missiles and high-tech projects like nuclear bombs, they need massive amounts of computing. Computing itself could even become a weapon. It is worth noting that it was Alan Turing who came up with the theoretical concept of a universal computer, laying the theoretical groundwork for the later development of the computer. The top
prize in computer science is now named after Turing in honor of his great
service. The Z3 computer, invented by the German engineer Konrad Zuse, could be programmed. This invention set the stage for the evolution of the modern computer.
The most commonly thought of computers are desktops and laptops. In fact,
the computer also exists in smartphones, cars, home appliances, and other
devices. However, no matter how varied the form, these computers all follow the von Neumann architecture. But in the details, there are big
differences between computers. Some computers use a multi-level cache,
some have only a keyboard without a mouse, and some have tape storage.
The hardware of a computer is a very complicated subject. Fortunately,
most computer users don’t have to deal directly with the hardware. This is
due to the operating system (OS).
An operating system is a set of software that runs on a computer and
manages its hardware and software resources. Both Microsoft's Windows and Apple's iOS are operating systems. When we program, most of the time it is through the operating system, which acts as the middleman dealing with the hardware. The operating system provides a set of system calls, which are the operations the operating system supports. When a system call is made, the computer performs the corresponding operation, much like pressing a key on a piano produces a corresponding note. The operating system also defines a number of library functions that group system calls into larger functions, like a chord made up of several tones. By programming, we take all these functions and libraries and create music that is useful.
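To make this concrete, here is a tiny sketch of the idea in Python (my own illustration, assuming a typical desktop operating system where file descriptor 1 is standard output; it is not from the original text). The print() function is a high-level library call, while os.write() sits much closer to the underlying "write" system call:

import os

# High-level library function: print() formats the text and hands it to the OS.
print("written through the print() library function")

# Lower level: os.write() passes bytes almost directly to the operating
# system's write system call on file descriptor 1 (standard output).
os.write(1, b"written through the write system call\n")

Both lines put text on the screen; the difference is how much of the work the library does for you.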
There are plenty of books on this subject on the market—thanks again for
choosing this one! Every effort was made to ensure it is full of as much
useful information as possible. Please enjoy!
Chapter 1: Python
This chapter gives a brief introduction to why programming is needed,
along with a short introduction about Python.

Why Should You Learn Computer Programming?

Learning any programming language, including Python, opens the door to the computer world. By programming, you can do almost everything a
computer can do—giving you plenty of room to be creative. If you think of
a need—say, to count the word frequency in a Harry Potter novel—you can
program it yourself. If you have a good idea, such as a website for mutual
learning, you can open up your computer and start writing. Once you learn
to program, you’ll find that software is mostly about brains and time, and
everything else is extremely cheap. There are many rewards to be gained
for writing programs. It could be an economic return, such as a high salary
or starting a publicly traded internet company. It can also be a reputational
reward, such as making programming software that many people love or
overcoming problems that plague the programming community. As the book Hackers and Painters puts it, programmers are as much creators as painters. Endless
opportunities for creativity are one of the great attractions of programming.
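As a small taste of that creativity, here is a minimal sketch of the word-count idea (my own illustration; the file name harry_potter.txt is hypothetical, and any plain-text novel would do):

from collections import Counter

# Read the whole book, lowercase it, and split it into words.
with open("harry_potter.txt", encoding="utf-8") as book:
    words = book.read().lower().split()

# Counter tallies how often each word appears.
frequency = Counter(words)
print(frequency.most_common(10))   # the ten most frequent words

A dozen lines of Python already answer a question that would be hopeless to settle by hand.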
Programming is the basic form of human-machine interaction. People use
programs to operate machines. Starting with the industrial revolution of the
18th century, people gradually moved away from the mode of production of
handicrafts and towards the production of machines. Machines were first
used in the cotton industry, and the quality of the yarn produced at the
beginning was inferior to that produced by hand. However, the machines
can work around the clock, tirelessly, and in prodigious quantities. Hence,
by the end of the 18th century, most of the world’s cotton had become
machine-made. Today, machines are commonplace in our lives. Artificial
intelligence is also penetrating more and more into production and life.
Workers use machines to make phones and other devices, doctors operate
with machines to perform minimally invasive surgery, and traders use
machines to trade high-frequency stocks. To put it cruelly, the ability to
deploy and possess machines will replace bloodlines and education as the
yardstick of class distinctions in the future. This is why programming
education is becoming more and more important.
Changes in the world of machines are revolutionizing the way that the
world works. Repetitive work is dead, and the need for programmers is
growing. A lot of people are teaching themselves to program to keep up
with the latest trends. Fortunately, programming is getting easier. From
assembly language to C to Python, programming languages are becoming
more accessible. In Python, for example, with the support of a rich set of modules, implementing a feature often requires only a few interface calls and little effort. The encapsulation we will talk about later is also about packaging functionality behind a clean interface that makes it easy for others to use. A program wraps a precision machine and provides the public with a standardized interface. Whether that interface is a fast and
secure payment platform or a simple and fast booking site, this
encapsulation and interface thinking is reflected in many aspects of social
life. Learning to program, therefore, is also a necessary step in
understanding contemporary life.

Advantages of Modern Programming


Programming ultimately comes down to calling the computer's basic instructions. The code would be incredibly verbose if an entire operation had to be spelled out in terms of those basic instructions. In his autobiography, former IBM president Thomas Watson Jr. recalls seeing an engineer who wanted to do multiplication calculations using punch cards stacked 1.2 meters tall.
Fortunately, programmers have come to realize that many specific
combinations of instructions are repeated. If you can reuse this code in your
program, you can save a lot of work. The key to reusing code is called
encapsulation.
"Packaging," or encapsulation, is the process of wrapping the instructions that perform a particular function into a block and giving it a name that is easy to look up. If you need to reuse this block, you can simply call it by name. It would
like asking the chef to make a “Chicken Burger” without specifying how
much meat, how much seasoning, and how long it takes to cook. The
operating system mentioned earlier is used to encapsulate some of the
underlying hardware operations for the upper application to call. Of course,
encapsulation comes at a cost, as it consumes computer resources. If you’re
using an early computer, the process of encapsulation and invocation can be
time-consuming and ultimately not worth the effort.
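To make the idea tangible, here is a minimal sketch of encapsulation in Python (my own illustration; the function name cash_needed is invented, and the 0.3 + 0.1 cash calculation is borrowed from a later chapter):

# Without encapsulation: the same calculation is written out every time.
price = 50000
cash = price * (0.3 + 0.1)
print(cash)

# With encapsulation: the steps are packaged into a named block
# that can be reused simply by calling its name.
def cash_needed(price):
    return price * (0.3 + 0.1)

print(cash_needed(50000))
print(cash_needed(80000))

The caller no longer needs to know how the cash is computed, only the name of the block that computes it.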
There are many ways to encapsulate code. Programmers write programs in a specific style: process-oriented (procedural) programming, object-oriented programming, or functional programming. In more rigorous terms, each style of programming is a programming paradigm. Programming languages began to fall into camps based on their paradigms: the process-oriented C language, the object-oriented Java language, the function-oriented Lisp language, and so on. A program that is written in any programming paradigm will eventually
translate into the simple combination of functions described above. So
programming requirements can always be implemented through a variety of
programming paradigms.
Now, the only difference is the convenience of the paradigm. Due to the
pros and cons of different paradigms, many modern programming
languages support a variety of programming paradigms, allowing
programmers to choose between them. Python is a multi-paradigm
language.
The programming paradigm is a major obstacle to learning programming. If
a programmer is familiar with a programming paradigm, then he can easily
learn other programming languages in the same paradigm. For a newbie, however, learning a multi-paradigm language like Python means finding different implementation styles for the same feature, which can be puzzling. Some college computer science courses choose to teach one typical language per paradigm, such as C, Java, and Lisp, so that students can pick up other languages in the future. But doing so can drag out the learning process. It seems to me that a multi-paradigm language like Python provides an opportunity to learn a variety of programming paradigms side by side. Within the same language framework, if the programmer can clearly distinguish between the different programming paradigms and understand the pros and cons of each, the effort pays off many times over. That is what this book tries to do: cover the three main paradigms, procedural, object-oriented, and functional, so that you effectively learn Python three times in one book. By learning Python from this book, you will not only learn the Python language but also lay the groundwork for learning other languages in the future.
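For a first taste of what "multi-paradigm" means in practice, here is a minimal sketch (my own illustration, not taken from the chapters that follow) that sums a list of numbers in a procedural, a functional, and an object-oriented style:

from functools import reduce

numbers = [1, 2, 3, 4]

# Procedural style: step-by-step instructions that update a running total.
total = 0
for n in numbers:
    total += n
print(total)

# Functional style: describe the computation as a combination of functions.
print(reduce(lambda a, b: a + b, numbers))

# Object-oriented style: wrap the data and its behaviour in a class.
class NumberList:
    def __init__(self, values):
        self.values = values

    def total(self):
        return sum(self.values)

print(NumberList(numbers).total())

All three print 10; the difference lies entirely in how the work is organized.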

Why Did Python Emerge as a Winner Among a Lot of Other Computer Languages?
The key to high-level languages is encapsulation, which makes
programming easy. Python became one of the major programming
languages because it was good at this. Python is widely used and is
Google’s third-largest development language. It is also the main language
used by Dropbox, Quora, Pinterest, Reddit, and other sites. In many
scientific fields, such as mathematics, artificial intelligence, bioinformatics,
and astrophysics, Python is gaining ground and enjoys contributions from programmers all around the globe on GitHub.
Of course, Python's success didn’t happen overnight. It has experienced two
or three decades of development since its birth. Looking back at the history
of Python, we can not only understand the history of Python but also
understand the philosophy of it.

What Is Python (and a Little Bit of History)


Python was written by Guido van Rossum, a Dutchman. In 1982, he received a master's degree in mathematics and computer science from the University of Amsterdam. However, although he was trained as a mathematician, he enjoyed computers even more. In his words, despite his dual aptitude for mathematics and computers, he always tended toward computer-related work and was keen on anything involving programming.
Before writing Python, Rossum was exposed to and used languages such as
Pascal, C, and Fortran. The focus of these languages is to make programs
run faster. In the 1980s, though IBM and Apple had already started a wave
of personal computers, the configuration of these personal computers
seemed very low. Early Macs had only an 8 MHz CPU and 128 KB of memory, and a slightly more complex operation could cause a
computer to crash. Therefore, the core of programming at that time was
optimization, so that the program can run smoothly under the limited
hardware performance. To be more efficient, programmers have to think
like computers so that they can write programs that are more machine-
friendly. They want to squeeze every bit of computer power out of their
hands. Some even argued that C pointers were a waste of memory. As for the high-level features we now routinely use in programming, such as dynamic typing, automatic memory management, and object orientation, the computers of those days would simply have crashed under them.
However, Rossum fretted about programming with performance as its sole
focus. Even if he had a clear idea in his head of how to implement a
function in C, the whole writing process would still take a lot of time.
Rossum preferred Shell to the C language. Unix system administrators often
used the Shell to write simple scripts to do some system maintenance work,
such as regular backup, file system management, and so on. The Shell can
act as a glue that ties together many of the features under UNIX. Many
programs with hundreds of lines in C can be done in just a few lines in the
Shell. However, the essence of the Shell is to invoke commands, and it is
not a real language. For example, the Shell has only a single data type, and complex operations are clumsy in it. In short, the Shell is not a good general-purpose programming
language.
Rossum wanted a general-purpose programming language that could call all
the functional interfaces of a computer like C and program as easily as a
Shell. The first thing that gave Rossum hope was the ABC language. The
ABC language was developed at the Centrum Wiskunde & Informatica (CWI) in the Netherlands.
This institute was where Rossum worked, so Rossum was involved in the
development of the ABC language. The purpose of the ABC language was
teaching. Unlike most languages of the time, the goal of ABC was “to make
users feel better.” ABC language hopes to make the language easy to read,
easy to use, easy to remember, and easy to learn, in order to stimulate
people’s interest in learning programming.
Despite the readability and ease of use, the ABC language did not catch on.
At the time, the ABC language compiler required a relatively high-end
computer to run. In those days, high-powered computers were a rarity, and
their users tended to be computer savvy already. These people were more
concerned with the efficiency of the program than with the difficulty of
learning the language. In addition to performance, the design of the ABC
language suffered from a number of fatal problems:
Poor extensibility. The ABC language is not a modular language. If you want to add features to the ABC language, such as graphical support, you have to change a lot of things.

No direct input/output. The ABC language cannot directly manipulate the file system. Although you can import data, such as a text stream, the ABC language cannot read or write files directly. The difficulty of input and output is fatal to a computer language. Can you imagine a sports car that can't open its doors?

Excessive innovation. The ABC language uses natural language to express the meaning of a program, such as its HOW-TO keyword for defining procedures. However, programmers who know other languages are more used to defining a function with a keyword such as def or function. Similarly, programmers are used to assigning variables with the equal sign (=). Innovation, while it makes the ABC language special, actually makes it harder for programmers to learn.

Transmission difficulty. The ABC compiler is large and must be saved on tape. When Rossum traveled for academic exchanges, he had to carry a large tape to install the ABC compiler for someone else. This made it difficult for the ABC language to spread quickly.

In 1989, to pass the Christmas holiday, Rossum began writing a compiler and interpreter for Python. In English, the word python refers to a large snake, but Rossum chose the name not because of the snake, but because of a beloved television series, Monty Python's Flying Circus. He hoped the new language, called Python, would fulfill his vision: a full-featured, easy-to-learn, easy-to-use, and extensible language sitting between C and the Shell. As a language design enthusiast, Rossum had experimented with designing languages before. That earlier design did not work out, but Rossum enjoyed the process, and this time he designed the Python language.
Chapter 2: Importance of Python
In 1991, the first Python compiler/interpreter was born. It was implemented in the C language and could call dynamic link libraries generated from C. From the time it was born, Python had the basic syntax it still has
today: classes, functions, exceptions, core data types including lists and
dictionaries, and a module-based extension system.

What Can You Do as a Python Programmer?


Much of the Python syntax comes from C but is strongly influenced by the
ABC language. Like the ABC language, Python uses indentation instead of
curly braces, for example, to make the program more readable. According
to Rossum, programmers spend far more time reading code than writing it.
Forced indenting makes your code easier to read and should be retained.
But unlike the ABC language, Rossum also values practicality. While
ensuring readability, Python deftly obeys some grammatical conventions
that already exist in other languages. Python uses equal sign assignment,
which is consistent with most languages. It uses def to define functions,
rather than the esoteric HOW-TO of the ABC language. Rossum argues that there is no need to be overly innovative where a convention has already become common sense.
Python also took a special interest in extensibility, which is another
embodiment of Rossum’s pragmatic principle. Python can be extended at
many levels. At a high level, you can extend the functionality of your code
by importing Python files written by others. You can also directly import C
and C++ compiled libraries for performance reasons. Thanks to years of
coding in C and C++, Python stands on the shoulders of a giant. Python is
like building a house out of steel, with a large framework and a modular
system to give programmers free rein.
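As a small illustration of that extensibility (my own sketch; the file name helper.py is hypothetical), importing someone else's code is a single line:

# math is part of the standard library; in the standard CPython interpreter
# it is implemented in C, so its functions are fast.
import math

print(math.sqrt(16))    # 4.0

# Importing your own file works the same way: if a file named helper.py
# sits in the same folder, "import helper" makes its functions available here.

High-level Python code and low-level compiled libraries plug into the same import mechanism.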
The original Python was entirely developed by Rossum himself. Because
Python hides many machine-level details and highlights logical-level programming thinking, this easy-to-use language was welcomed by
Rossum’s colleagues. These colleagues, many of whom were involved in
improving the language, were happy to use Python at work and then gave
Rossum feedback on how to use it. Rossum and his colleagues made up the
core Python team and devoted most of their spare time to Python. Python
also gradually spread from Rossum’s circle of colleagues to other scientific
institutions, and was slowly adopted outside the academic community for program development.
The popularity of Python has been linked to significant improvements in
computer performance. In the early 1990s, personal computers entered ordinary households. Intel released the 486 processor, which
represents the fourth generation of processors. In 1993, Intel launched a
better Pentium processor. The performance of the computer improved greatly, and programmers no longer had to work so hard to make their programs efficient. More and more attention was paid to the ease of use of computers. Microsoft launched Windows 3.0 and then a whole series of Windows systems with easy-to-use graphical interfaces, which attracted a large number of regular users. Languages that could produce software quickly, such as those running on virtual machines, became the new stars, with Java chief among them. Java is entirely based on an object-oriented programming paradigm, which increases programmer productivity at the expense of performance. Python was a step behind Java, but its ease of use was just as current. As I said earlier, the ABC language was a failure. One
important reason is the performance limitations of the hardware. In this
respect, Python is much luckier than the ABC language.
Another quiet change is the Internet. In the 1990s, during the era of the
personal computer, Microsoft and Intel dominated the PC market, almost
monopolizing it. At the time, the information revolution had not yet arrived for ordinary households, but for programmers the Internet was already an everyday tool. Programmers
are the first to use the Internet for communication, such as e-mail and
newsgroups. The Internet has made it much cheaper to exchange
information, and it has allowed people with similar interests to come
together across geographic boundaries. Based on the communications
capabilities of the Internet, Open Source software development models
have become popular. Programmers spend their spare time developing
software and opening up source code. In 1991, Linus Torvalds's release of
the Linux kernel source on the Minix newsgroup attracted a large number of
coders to join the development effort and led the open-source movement.
Linux and GNU work together to form a vibrant open-source platform.
Rossum, himself an open-source pioneer, maintained a mailing list that gathered the early Python users, who were able to communicate as a group via email. Most of these users were programmers,
and they have pretty good development skills. They come from many fields,
have different backgrounds, and have various functional requirements for
Python.
Because Python is so open and easy to extend, a user who is not satisfied with the existing functionality can easily extend or transform it. These users then send their changes to Rossum, who decides whether to add the new feature to Python. Having one's code adopted is considered a great honor. Rossum's own role has increasingly become one of setting the overall direction.
If a problem is too complex, Rossum steps back and leaves it to the community to handle. Even things like running the website and raising funds are taken care of by others. The community has matured, and the development work is divided up across the whole community.
One idea behind Python is "batteries included": Python already ships with many functional modules. A module is simply a well-written Python program, produced by someone else, that achieves a certain function. A programmer does not need to reinvent the wheel and can simply refer to existing modules. These modules include both Python's own standard library and third-party libraries outside of the standard library. These "batteries" are also a contribution from the entire community. Python's developers come from different worlds, and they bring the benefits of those worlds to Python: the regular expression support in the Python standard library and the functional programming syntax, which draws on the Lisp language, are both contributions from the community. Python provides a rich arsenal
of weapons within a concise syntax framework. Whether it’s building a
website, creating an artificial intelligence program, or manipulating a
wearable device, it can be done with an existing library and short code. This
is probably the happiest place for a Python programmer.
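As a quick illustration of those "batteries" (my own sketch; the e-mail addresses are made up), the regular expression module mentioned above is one import away:

import re   # regular expressions, one of the batteries in the standard library

text = "Contact us at info@example.com or sales@example.org"
# Find every e-mail-like pattern in the text.
emails = re.findall(r"[\w.]+@[\w.]+", text)
print(emails)   # ['info@example.com', 'sales@example.org']

No installation and no wheel-building: the module is already there.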
Of course, Python has its share of pains. The latest version of Python is 3,
but Python 3 and Python 2 are incompatible. Since much of the existing
code is written in Python 2, the transition from version 2 to version 3 is not
easy. Many people have chosen to continue using Python 2. Some people
joke that Python 2's version number will eventually creep up to 2.7.31415926. In addition to the issue of versioning, Python's performance is criticized from time to time. Python performs worse than C and C++, and we'll talk about why in this book. While Python keeps improving its own performance, the performance gap will always be there. But judging from Python's history, such a critique is nitpicking. Python deliberately trades performance for usability, moving in the opposite direction from C and C++. Complaining about it makes about as much sense as blaming a football striker for not being a good goalkeeper.
For starters, learning to program from Python has many benefits, such as
the simple syntax and rich modules mentioned above. Many universities' introductory computer science courses have begun to choose Python as the course language, replacing the once-common C or Java. But it is a fantasy to regard Python as the "best language" and to expect that learning Python alone will conquer everything. Every language has its good points, but it also has all sorts of flaws. Whether a language is judged "good" or "bad" also depends on the platform, the hardware, the times, and other external factors. Furthermore, many development efforts require specific languages, such as writing Android apps in Java and Apple apps in Objective-C or Swift. No matter what language you start with, you will not stop at the first language you learn. Only by drawing on many languages can the creativity of programming truly flourish.

Example Program to Show You a Good Overview of Python Programming
Python is easy to install, and you can refer to the next chapter. There are
two ways to run Python. If you want to try a few programs and see the
results immediately, you can run Python from the command line. The
command line is a small input field waiting for you to type on your
keyboard and speak directly to Python.
Start the command line, as shown in the next chapter, and you’re in Python.
Typically, the command line will have a text prompt to remind you to type
after the prompt. The Python statements you type are translated into
computer instructions by the Python interpreter. We now perform a simple
operation: let the computer screen display a line of words. Type the
following at the command-line prompt and press Enter on your keyboard to
confirm:
>>>print ("Oh, my name is Jesus Christ")

As you can see, when you press Enter on the keyboard, the screen then displays:
Oh, my name is Jesus Christ

The word print is the name of a function. Each function does a specific job; the print() function simply prints characters on the screen. The function name is followed by parentheses containing the characters you want to print: "Oh, my name is Jesus Christ". The double quotation marks inside the parentheses are not printed on the screen. They mark the ordinary characters off from program text such as print, to avoid confusing the computer. You can also replace the double quotes with a pair of single quotes.
The second way to use Python is to write a program file. A Python program file uses .py as its suffix and can be created and edited with any text editor. The appendix describes common text editors for different operating systems. Create a file introduction.py, write the following, and save it:
print("Oh, my name is Jesus Christ")

As you can see, the program content here is exactly the same as it was on
the command line. Compared with the command line, program files are
suitable for writing and saving a large number of programs.
Run introduction.py, and you'll see that Python prints "Oh, my name is Jesus Christ" on the screen. The contents of the program file are the same as
those typed on the command line and produce the same results. Program
files are easier to save and change than programs that are entered directly
from the command line, so they are often used to write large numbers of
programs.
Another benefit of program files is the ability to add comments. Comments
are words that explain a program and make it easier for other programmers
to understand it. Therefore, the content of the comment is not executed as a
program. In a Python program file, every line starting with # is a comment, so we can annotate introduction.py:
print("Oh, my name is Jesus Christ") #Display those words on the screen

If you have too many comments to fit in a single line, you can use a
multiline comment.
""" Author: sample Function: Use this to display words """ print('Oh, my name is Jesus Christ')

The multi-line comment markers are three consecutive double quotation marks. Multi-line comments can also use three consecutive single quotation marks. Between the two sets of quotation marks is the content of the multi-line comment.
Either the characters you want to print or the text of a comment can be in a language other than English. If you use such characters in Python 2, you need to add an encoding line before the program begins, stating that the program file uses UTF-8, an encoding that supports hundreds of languages. This line is not required in Python 3.
# -*- coding: utf-8 -*-

So, we wrote a very simple Python program. Don't underestimate this program. To run it, your computer does complicated work: it reads the program file, allocates space in memory, performs many operations and control steps, and finally drives the screen to display a string of characters. The smooth running of this program shows that the computer hardware, the operating system, and the language compiler/interpreter have all been installed and set up correctly. That is why the first task a programmer usually performs is to print a line of text on the screen: a first greeting to the Python world.

Python Reserved Words


Python has certain keywords that cannot be used as variable names because they have special meanings in the language itself. There are 33 reserved keywords in Python 3; and, as, assert, break, def, and del are a few of them.
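You never have to memorize the full list: Python itself can print it. Here is a minimal sketch using the standard keyword module (the exact count depends on the interpreter version; newer releases add a few keywords):

import keyword

# keyword.kwlist holds every reserved word of the running interpreter.
print(keyword.kwlist)
print(len(keyword.kwlist))   # 33 on older Python 3 releases; newer versions report a few more

If you try to use any of these names as a variable, Python reports a syntax error.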
Chapter 3: How to Install Python
This chapter deals with how to install Python in different operating systems
with clear-cut instructions so that newbies will not face any issues while
installing Python for the first time.

Installation and Running of Python


Official Version Installation
1 ) Mac
Python is already pre-installed on the Mac and can be used directly. If you
want to use other versions of Python, we recommend using the Homebrew
installation. Open the Terminal and enter the following command at a
command-line prompt, which brings you to Python’s Interactive Command
Line:
$python

The python command above is usually a soft link to a specific version of Python, such as version 3.5. If the corresponding version is already installed, you can run it directly in the following manner:
$python3.5

The terminal will display information about Python, such as its version
number, followed by a command-line prompt for Python. If you want to
exit Python, type:
>>>exit()

If you want to run a Python Program in your current directory, append the
name to Python or Python 3:
$python installation.py

If the file is not in the current directory, you need to specify the full path of
the file, such as:
$python /home/authorname/installation.py

We can also change the installation.py to an executable script. Just add the
Python interpreter you want to use to the first line of the installation.py:
#!/usr/bin/env python
In the terminal, make installation.py executable:
$chmod +x installation.py

Then, on the command line, type the name of the program file, and you’re
ready to run it using the specified interpreter:
$./installation.py

If installation.py is in a directory on the system's search path, then the system can automatically find the executable and run it from any path:
$installation.py

2 ) Linux
Linux systems are similar to Mac systems, and most come preloaded with Python. Many Linux systems offer a package manager similar to Homebrew; under Ubuntu, for example, Python is installed using the following command:
$sudo apt-get install python

Under Linux, Python is used and run in much the same way as on the Mac, so I won't go into that again.
3) Windows Operating System
For the Windows operating system, you need to download the installation
package from the official Python web site. If you don't have access to Python's web site, use a search engine with keywords like "Python Windows download" to find other download sources. The installation process is similar to installing other Windows software. In the install screen, choose the Customize installation option and, in addition to selecting the Python components, check:
Add python.exe to Path

Once installed, you can open the Windows command line and use Python as
you would on a Mac.

Other Python Versions


The official version of Python mainly provides compiler / interpreter
functionality. Other unofficial versions have richer features and interfaces,
such as a more user-friendly graphical interface, a text editor for Python, or an easier-to-use module management system that helps you find a variety of extension modules. Among the unofficial Python distributions, the two most commonly used are:
1) Anaconda
2) Enthought Python Distribution (EPD)
Both versions are easier to install and use than the official version of
Python. With the help of a module management system, programmers can
also avoid annoying problems with module installation. So it’s highly
recommended for beginners. Anaconda is free, and EPD is free for students
and researchers. Because of the graphical interface provided, their use
method is also quite intuitive. I strongly recommend that beginners choose
one of these two versions to use. The exact usage can be found in the
official documentation and will not be repeated here.

Virtualenv
You can install multiple versions of Python on a single computer, and using
virtualenv creates a virtual environment for each version of Python. Here’s
how to install virtualenv using Python’s included pip.
$pip install virtualenv

You can create a virtual space for a version of Python on your computer,
such as:
$virtualenv -p /usr/bin/python3.5 virtualpythonexample

In the above command, /usr/bin/python3.5 is where the interpreter is located, and virtualpythonexample is the name of the newly created virtual environment.
The following command starts using the virtualpythonexample virtual environment:
$source virtualpythonexample/bin/activate

To exit the virtual environment, use the following command:


$deactivate
Chapter 4: The World of Variables
This chapter explains variables in detail. Variables are essential to every programming language and are fundamental to the further understanding of computer languages. Plenty of examples are included, so kindly follow along on your own computer.
The data that appears in a program, whether a number like 1 or 5.2, or a Boolean value like True or False, will disappear after the operation finishes, for reasons of computer architecture that are out of the scope of this book. Sometimes, we want to store data so that we can reuse it
in later programs. Each storage unit in computer memory has an address,
like a house number. We can store the data in a cubicle at a particular house
number, and then extract the previously stored data from the house number.
But indexing a stored address with a memory address is not convenient
because:
Memory addresses are verbose and hard to remember.

Each address corresponds to a fixed storage space size; it is difficult to adapt to the varying sizes of different data types.

Before operating on an address, it is not known whether the storage space for that address is already occupied.

With the development of programming languages, programmers began to use variables to store data. Variables, like memory addresses, serve the function of indexing data. When a variable is created, the computer allocates storage space in free memory to hold the data; unlike raw memory addresses, the amount of storage allocated varies depending on the type of the variable. The program gives the variable a name, which serves as an index to that storage space within the program. Data is assigned to the variable, and the data is retrieved by the name of the variable as needed.
Consider this example:
exvariable = "Harry"
print(exvariable)
#prints out Harry.
The output is below:
Harry

In the above program, we pass the value "Harry" to the variable exvariable, a process called assignment. Assignment is represented by an equal sign in Python. With the assignment, a variable is created. From a
hardware perspective, the process of assigning a value to a variable is the
process of storing the data in memory. A variable is like a small room in
which data can be stored. Its name is the house number. The assignment is
to send a guest to the room.
"Send Harry to Room exvariable"

In the subsequent print statement, we extract the data referenced by the variable name exvariable and print it out with print(). A variable name is a clue to the corresponding data in memory. With this memory, the computer won't suffer from amnesia.
Question: “Who’s in the exvariable room? “
Answer: “It’s Harry”
Hotel rooms will have different guests check in or check out. The same is
true of variables. We can assign a different value to the same variable. This changes the guest in the room.
exvariable = "Sachin"
print(exvariable)
# prints out "Sachin"

Output is:
Sachin

In computer programming, many variables are often set, with each variable
storing data for a different function. For example, a computer game might
record the number of different resources that a player has, and it would be
possible to record different resources with different variables.
diamond = 50 # means 50 points
bronze = 20 # means 20 points
fire = 10 # means 10 points
During the game, you can add or subtract resources, depending on the situation. For example, if a player chooses to collect bronze, he gains twenty more points. At this point, you can add 20 to the corresponding variable.
bronze = bronze + 20
print(bronze)
# prints 40

Output is:
40

The computer first performs the operation to the right of the assignment
symbol. The original value of the variable is added to 20 and assigned to the
same variable. Throughout the game, the variable bronze plays a role in
tracking data. The player’s resource data is stored properly.
Variable names are directly involved in computation, which is the first step
towards abstract thinking. In mathematics, the substitution of symbols for
numbers is called Algebra. Many middle school students today will set out
Algebraic equations to solve such mathematical problems as “chicken and
rabbit in the same cage.” But in ancient times, Algebra was quite advanced
mathematics. The Europeans learned advanced Algebra from the Arabs,
using the symbolic system of Algebra to free themselves from the shackles
of specific numbers and to focus more on the relationship between logic
and symbols. The development of modern mathematics on the basis of
Algebra laid a foundation for the explosion of modern science and
technology. Variables also give programming a higher level of abstraction.
The symbolic representation provided by variables is the first step in
implementing code reuse. For example, the code used to calculate the
amount of cash needed to buy a house:
50000*(0.3 + 0.1)

When we look at more than one house at a time, the price of 50000 changes. For convenience, we can write the program as:
sellingprice = 50000
needed = sellingprice * (0.3 + 0.1)
print(needed)
# prints the amount of cash needed

Output is:
20000.0

This way, you only need to change the value of 50000 each time you use
the program. Of course, we’ll see more ways to reuse code in the future.
But variables, which use abstract symbols instead of concrete numbers, are
representative.
Chapter 5: Data Types in Python
This chapter deals with datatypes that variables use. At the start, variables
had no data types. However, for the convenience of programmers, data
types have been implemented in different programming languages. Python
supports several basic data types along with a few container data types like tuples and dictionaries. We will describe these data types in detail.

Basic Data Types


There may be many different types of data, such as integers like 53,
floating-point numbers like 16.3, Boolean values like True and False, and
the string “Hello World! “. In Python, we can assign various types of data to
the same variable. For example:
datatype_integer = 32
print(datatype_integer)
datatype_string = "My name is Jesus"
print(datatype_string)

As you can see, the value later assigned to the variable replaces the original
value of the variable. This freedom to change the type of data a variable holds is called dynamic typing. Not all
languages support dynamic typing. In a Static Typing language, variables
have pre-specified types. A specific type of data must be stored in a specific
type of variable. Dynamic typing is more flexible and convenient than static
typing.
Even if you can change it freely, Python’s variables themselves have types.
We can use the function type() to see the type of the variable. For example:
datatype_integer = 64.34
print(type(datatype_integer))

The output is:


<class 'float'>

float is short for floating-point number. In addition, there are integers (int), strings (str), and Booleans (bool). These are the common data types included in the Python programming language.
Computers need to store different types in different ways. Integers can be
represented directly as binary numbers, while floating-point numbers have
an extra record of the decimal point’s position. The storage space required
for each type of data is also different. The computer's storage space is measured in bits; each bit can store either a 0 or a 1. To record a Boolean value, we simply let 1 represent true and 0 represent false, so storing a Boolean value needs only one bit. The integer 4 converts to binary as 100; to store it, at least three bits are needed, holding 1, 0, and 0 respectively.
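You can ask Python to show these binary digits directly. A small sketch (my own illustration):

# bin() shows the binary digits a number needs in memory.
print(bin(4))      # 0b100  -> three bits: 1, 0, 0
print(bin(53))     # 0b110101
# A Boolean carries only a single bit of information.
print(int(True), int(False))   # 1 0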
For efficiency, a computer must store each type of data in a suitable way in memory. In a statically typed language, a new variable must therefore declare its type. A dynamically typed language does not need the type to be declared, but leaves the task of distinguishing types to the interpreter. When we change the value of a variable, the Python interpreter automatically identifies the type of the new data and allocates memory space for it. The Python interpreter's thoughtful service makes programming easier, but it also spends some of your computer's capability on managing dynamic types. That is one reason Python is not as fast as statically typed languages like C.

Tuples and Lists


Some variable types in Python act as containers and can hold more than one piece of data. The sequences described in this section and the dictionaries described in the next section are both container types. Let's start with sequences. Like a line of soldiers, a sequence is an ordered collection of data. Each piece of data contained in a sequence is called an element of the sequence. A sequence can contain one or more elements, or it can be an empty sequence with no elements at all.
There are two types of sequences: tuples and lists. The main difference between the two is that once established, the elements of a tuple cannot be changed, while the elements of a list can. Tuples therefore look like a special kind of list with fixed data, and some translations even call them "fixed lists." Tuples and lists are created as follows:
>>>tuplethisis = (65, 9.87, "church", 6.8, 7, True)
>>>listthisis= [False, 4, "laugh"]
>>>type(tuplethisis) # shows <class 'tuple'>
>>>type(listthisis) # shows <class 'list'>
As you can see, the same sequence can contain different types of elements,
which is also an embodiment of the Python dynamic type. Also, the element
of a sequence can be not only a primitive type of data but also another
sequence.
>>>nestlistthisis = [63,[43,84,35]]

Since tuples cannot change their data, an empty tuple is rarely created. Lists, however, can add and modify elements, so Python programs often create empty lists:
>>>thisisempty = []

Since sequences are used to store data, we also need to read the data back out of them. The elements in a sequence are ordered, so you can find an element based on its position. The positional index of a sequence element is called the subscript (index). The subscript for a sequence in Python starts at 0, which is the subscript of the first element. There are historical reasons for this, in keeping with the classic C language. Let's try referencing elements in a sequence, as shown below. The data in a list can also be changed, so individual elements can be assigned new values; you use the subscript to indicate which element you want to target.
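Continuing with the tuple and list created above, a short interactive sketch (my own illustration) shows both reading by subscript and assigning to an element:

>>>tuplethisis[0] # the first element of the tuple
65
>>>listthisis[2] # the third element of the list
'laugh'
>>>listthisis[1] = 10 # list elements can be changed in place
>>>listthisis
[False, 10, 'laugh']
>>>tuplethisis[0] = 1 # tuple elements cannot be changed; this raises a TypeError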

Dictionaries
In Python, there are also special data types called dictionaries. We will now
explain these data types in detail.
Dictionaries are similar to lists in many ways. A dictionary is also a container that can hold multiple elements. But dictionaries are not indexed by position; a dictionary allows you to index data in a custom way.
>>>thisisdictionary = {"Harry":22, "Ron":37, "Hermoine":64}
>>>type(thisisdictionary) # shows <class 'dict'>

The dictionary contains multiple elements, each separated by a comma. A dictionary element consists of two parts, a key and a value. The key is the
index of the data, and the value is the data itself. The key corresponds to the
value one by one. For example, in the example above, “Harry” corresponds
to 22, “Ron” corresponds to 37, and “Hermoine” corresponds to 64.
Because of the one-to-one correspondence between key values, dictionary
elements can be referenced by keys.
>>>thisisdictionary["Ron"]

Output is:
37

To modify an existing element's value or add a new element to the dictionary:
>>>thisisdictionary["Hermoine"] = 57
>>>thisisdictionary["Neville"] = 75
Chapter 6: Operators in Python
Since it is called a "computer," mathematical calculation is, of course, its most basic skill. Arithmetic operations in Python are simple and intuitive.
Open up the Python Command Line, type in the following numeric
operation, and you’re ready to run it:

Mathematical Operators
1) Addition
>>>4 + 2

2) Subtraction
>>>4 - 2

3) Multiplication
>>>4 * 2

4) Division
>>>4 / 2

5) Remainder
>>>4 % 2

With these basic operations, we can use Python as if we were using a calculator. Take buying a house. A property costs 20000 dollars and is
subject to a 5% tax on the purchase, plus a 10% down payment to the bank.
Then, we can use the following code to calculate the amount of cash to be
prepared:
>>>20000*(0.05 + 0.1)

In addition to the usual numeric operations, strings can also be added. The effect is to concatenate two strings into one string.

String
Input:
>>>" I am a follower of " + "Christianity"
Output:
I am a follower of Christianity
Input:
>>>"Example" *2

Output:
ExampleExample

Multiplying a string by an integer n repeats the string n times.

Comparison Operator
Python uses comparison operators like ==, >, and < in its program. Below,
we will explain with an example.
Program code is below:
first = 34
second = 44
if first > second:
    print("First one is larger")
else:
    print("Second one is larger")

Output is:
Second one is larger

Logical Operators
In addition to numerical operations, computers can also perform logical
operations. It's easy to understand logic if you've played a deduction game or enjoyed a detective story. Like Sherlock Holmes, we use logic to
determine whether a statement is true or false. A hypothetical statement is
called a proposition, such as “player a is a killer.” The task of logic is to
find out whether a proposition is true or false.
Computers use the binary system, where they record data in zeros and ones.
There are technical reasons why computers use binary. Many of the
components that make up a computer can only represent two states, such as
the on and off of a circuit, or the high and low voltages. The resulting
system is also relatively stable. If you used the decimal system instead, some computer components would need 10 distinct states, such as dividing the voltage into 10 levels.
That way, the system becomes complex and error-prone. In Binary Systems,
1 and 0 can be used to represent the true and false states. In Python, we use
the keywords True and False to indicate True and False. Data such as True
and False are called Boolean values.
Sometimes, we need further logical operations to determine whether a
complex proposition is true or false. For example, in the first round, I
learned that “player a is not a killer” is true, and in the second round, I
learned that “player B is not a killer” is true. So in the third round, if
someone says, “player a is not a killer, and player B is not a killer,” then
that person is telling the truth. If the two propositions connected by “and”
are both true, then the whole proposition is true. In effect, we have performed a logical "and" operation.
In the and operation, the compound proposition connected by and is true only when both subpropositions are true. The and operation is like two bridges in a row: you must have both bridges open to cross the river, as shown in figure 2-1. Take, for example, the proposition that China is in Asia
and Britain is in Asia. The proposition that Britain is in Asia is false, so the
whole proposition is false. In Python, we use and for the logical operation
of and.
>>>True and True # True
>>>False and True # False
>>>False and False # False

We can also compound the two propositions with “or.” Or is humbler than
an aggressive “and.” In the phrase “China is in Asia, or Britain is in Asia,”
for example, the speaker leaves himself room. Since the first half of this
sentence is true, the whole proposition is true. “Or” corresponds to the “or”
logic operation.
In the "or" operation, as long as at least one proposition is true, the compound proposition connected by "or" is true. The or operation is like two
bridges crossing the river in parallel. If either bridge is clear, pedestrians
can cross the river.
The logical operations above may seem like mere common sense that hardly needs a tool as complex as a computer. Combined with comparison expressions, however, logical operations really show their power, as the example below suggests.
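For example, continuing the house-price theme (my own sketch), comparison expressions produce Boolean values that and, or, and not can then combine:

>>>price = 340000
>>>price > 200000 and price < 500000 # both comparisons are true
True
>>>price < 200000 or price > 300000 # the second comparison is true
True
>>>not price > 200000 # not flips a Boolean value
False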

Operator Precedence
If there is more than one operator in an expression, consider the precedence
of the operation. Different operators have different precedence. Operators
can be grouped in order of precedence, from highest to lowest.
Exponentiation has the highest precedence, followed by the arithmetic operators: multiplication and division, then addition and subtraction. Bitwise operators come next, then comparison operators, and the logical operators not, and, and or are evaluated last.
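As a rough illustration of these rules (not an exhaustive precedence table):
>>> 2 + 3 * 2 ** 2          # exponent first, then multiplication, then addition: 14
>>> (2 + 3) * 2 ** 2        # parentheses override precedence: 20
>>> 1 + 1 == 2 and 3 > 2    # arithmetic and comparisons are evaluated before and: True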
Chapter 7: Execution and Repetitive Tasks
This chapter deals with if-else statement and different types of loop
structures that help in repetitive tasks in programming.

If Structure
So far, the Python programs we’ve seen have been instruction-based. In a
program, computer instructions are executed sequentially. Instructions
cannot be skipped, nor repeated backward. The first programs were all like
this. For example, to make a light come on ten times, repeat ten lines of
instructions to make the light come on.
In order to make the program flexible, early programming languages added
the function of “jump.” With jump instructions, we can jump to any line in
the program during execution and continue down. For example, to repeat
execution, jump to a line that has already been executed. Programmers
frequently jump forward and backward in their programs, for convenience.
As a result, the program runs in a sequence that looks like a tangle of
noodles, hard to read and prone to error.
Programmers have come to realize that the main function of a jump is to
execute a program selectively or repeatedly. Computer experts have also
argued that with the grammatical results of “selection” and “loop,” “jump”
is no longer necessary. Both structures change the flow of program
execution and the order in which instructions are executed. Programming
languages have entered a structured age. Compared with the “Spaghetti
Program” brought about by the “jump,” the structured program becomes
pleasing to the eye. In modern programming languages, the “jump” syntax
has been completely abolished.
Let's start with a simple example of a selection structure: if a house sells for more than $200,000, the transaction tax rate is 4%; otherwise it is 3%. We write a program using a selection structure.
Below is the program code:
price = 340000
if price > 200000:
    fixedtax = 0.04
else:
    fixedtax = 0.03
print(fixedtax)
# prints 0.04

Output is:
0.04
In this program, there is an if statement that we have not seen before. Its function is easy to understand: if the total price exceeds 200,000, the tax rate is 4%; otherwise, it is 3%.
The keywords, if and else, each have a line of code attached to them, with a
four-space indentation at the beginning of the dependent code. The program
will eventually choose whether to execute the if dependent code or the else
dependent code, depending on whether the condition after the if holds. In
short, the if structure branches off from the program.
If and else can be followed by more than one line:
price = 340000
if price > 200000:
    print("price is above 200,000")
    fixedtax = 0.04
else:
    print("price is below 200,000")
    fixedtax = 0.03
print(fixedtax)
# The result is 0.04

Output is:
price is above 200,000
0.04

As you can see, the code that belongs to the if or the else is indented four spaces. The keywords if and else are like two team leaders standing at the head of the line, each with indented code lined up behind them; only the leader whose condition wins gets to have his code executed. The final print statement stands at the beginning of the line again, indicating that it is on an equal footing with if and else: the program executes it without any conditional judgment.
Else is not mandatory; we can write an if on its own. For example:
price = 340000
if price > 200000:
    print("total price over $200,000")

Without else, it is effectively equivalent to an empty else. If the condition after if doesn't hold, then the computer doesn't have to do anything.

Stand Back
Python features indentation to indicate the dependencies of your code. As
we show, the design for indenting code relationships is derived from the
ABC language. For comparison, let’s look at how C is written:
if ( price > 0 ) {
    selling = 1;
    buying = 2;
}

The program means that if the variable price is greater than 0, we perform the two assignments enclosed in the curly braces. In C, curly braces mark a block of code that is subordinate to the if. Programmers usually also indent their C code to make the dependencies between instructions easier to see, but indentation is not mandatory: even without indentation, the C program runs normally and produces the same result as above.
In Python, the same program must be written in the following form:
if price > 0:
    selling = 1
    buying = 2

In Python, the parentheses around the condition are removed, the semicolon at the end of each statement is removed, and the curly braces around the block are also removed. What is added is the colon (:) at the end of the if line and the four-space indentation in front of the two assignments. By indenting, Python recognizes that both statements are subordinate to the if. Indentation in Python is mandatory, because it is what expresses the dependencies. The following program therefore has an entirely different effect:
if price > 0:
    selling = 1
buying = 2
Here, only the assignment selling = 1 is subordinate to the if; the second assignment is no longer part of it. Whatever happens, buying will be assigned the value 2.
It should be said that most mainstream languages today, such as C, C++,
Java, JavaScript, mark blocks with curly braces, and indentation is not
mandatory. This syntax is derived from the popularity of the C language.
On the other hand, while indenting is not mandatory, experienced
programmers write programs in these languages with indenting in order to
make them easier to read. Many editors also have the ability to indent
programs automatically. Python’s forced indentation may seem
counterintuitive, but it’s really just a matter of enforcing this convention at
the syntactic level so that programs look better and are easier to read. This
way of writing, with four spaces indented to indicate affiliation, is also seen
in other Python syntax constructs.

If Nesting and Elif


Now back to the selection structure. Selection frees the program from a rigid, fixed sequence of commands: a program can contain branches, and depending on the conditions, the same program can adapt to a changing environment. With the elif syntax and nested use of if, programs can branch in even richer ways.
The next program uses the elif structure. Depending on the condition, the program has three branches:
result = 1
if result > 0:
    # Condition 1. Since result is 1, this part is executed.
    print("positive result")
    result = result + 1
elif result == 0:
    # Condition 2. This part is not executed.
    print("result is 0")
    result = result * 10
else:
    # Condition 3. This part is not executed.
    print("negative result")
    result = result - 1

There are three blocks, led by if, elif, and else. Python first tests the if condition; if it is false, it skips the if block and tests the elif condition; if that is also false, it executes the else block. The program executes only one of the three branches, depending on the conditions. Since result has the value 1, only the if part is executed. In the same way, you can add more elif clauses between if and else to give your program more branches.
We can also nest an if structure inside another if structure:
result = 5
if result > 1:  # This condition holds, so execute the internal code
    print("result bigger than 1")
    print("nice")
    if result > 2:  # nested if structure; the condition also holds
        print("result bigger than 2")
        print("It's better than before")

After the first if judgment, if the condition holds, the program runs on in sequence and meets the second if construct. It again judges the condition and decides whether to execute the nested block. The code belonging to the inner if is indented four more spaces relative to the outer if; whatever is indented further is subordinate to the inner if. In general, the if construct lets us branch a program: depending on the conditions, the program takes a different path.

For Loop
Loops are used to iterate through blocks of code. In Python, loops are either
for or while. Let’s start with the for loop. From the selection structure in
section 2.3, we have seen how to use indentation to represent the
membership of a block. Loops use similar notation. Programs that are part
of a loop and need to be repeated are indented, such as:
for c in [4, 6.8, "love"]:
    print(c)  # prints each element in the list in turn

The loop takes one element at a time from the list [4, 6.8, "love"], assigns it to c, and executes the line that belongs to the for, which calls the print() function to print the element. As you can see, the basic form of a for loop is to follow in with a sequence:

For Element in Sequence


The number of elements in the sequence determines the number of iterations. There are three elements in the example, so print() is executed three times. That is, the number of repetitions of a for loop is fixed. The for loop takes elements from the sequence one by one and assigns them to the variable immediately after for (c in the example above). Therefore, even though the statement executed is the same each time, its effect changes across the three executions because the data changes.
One of the conveniences of a for loop is to take an element from a
sequence, assign it to a variable, and use it in a membership program. But
sometimes, if we simply want to repeat a certain number of times and don’t
want to create a sequence, we can use the range() function provided by
Python:
for result in range(3):
    print("This is crazy")  # prints "This is crazy" three times

The 3 passed to the range() function indicates the number of times you want to repeat, so the program that belongs to the for is executed three times. The loop variable, here result, is still available and counts each pass of the loop:
for result in range(7):
    print(result, "This is crazy")  # prints the sequence number and "This is crazy"

As you can see, the count provided by range() in Python also starts at 0, the same as a list index. We also saw a new use of print(), which is to
specify multiple variables in parentheses, separated by commas. The
function print() prints them all.
Let’s look at a practical example of a for loop. We have previously used
Tuples to record the yearly interest rate on a mortgage.
thisisinteresttuple = (0.06, 0.07, 0.08, 0.1, 0.3)

If there is a 200,000-dollar mortgage and the principal stays unchanged, how much interest must be paid each year? Using the for loop:
price = 200000
for interest in thisisinteresttuple:
    debt = price * interest
    print("you need to pay", debt)

While Loop
There is another loop structure in Python, the while loop. Its use is:
check = 0
while check < 20:
    print(check)
    check = check + 1
# prints from 0 to 19

A while is followed by a condition. If the condition is true, the while loop continues to execute the statements that belong to it; the loop only stops when the condition becomes false. Inside the while block, we change the variable check, which takes part in the conditional judgment, until it reaches 20, so that the condition fails and the loop terminates. This is common practice for while loops. Otherwise, if the while condition is always true, it becomes an infinite loop.
Once there is an infinite loop, the program keeps running until it is interrupted or the computer shuts down. But sometimes infinite loops are useful: many graphical programs run an infinite loop to keep checking the state of the page, and so on. If we were to develop a ticket-snatching program, an infinite loop would sound appealing. An infinite loop can be written simply as:
while True:
    print("This is crazy")

Skip or Abort
The loop structure also provides two useful statements that can be used
inside the loop structure to skip or terminate the loop. continue skips the rest of the current pass and moves on to the next iteration; break stops the whole loop.
for result in range(20):
    if result == 2:
        continue
    print(result)
# prints 0, 1, 3, 4, ..., 19; notice that 2 is skipped

When the loop reaches the point where result is 2, the if condition holds and triggers continue. Instead of printing result, the program proceeds to the next iteration, assigns 3 to result, and continues executing the statements that belong to the for. While continue merely skips one pass, break is much more drastic: it terminates the entire loop.
for result in range(20):
    if result == 2:
        break
    print(result)  # prints only 0 and 1

When the loop reaches 2, the if condition holds, triggers the break, and the
entire loop stops. The program no longer executes the statements inside the
for loop.

Small Exercise to Review What We Learned Until Now
In this chapter, we learned about operators and variables, as well as two flow-control structures: selection and loops. Now, let's do a more complicated exercise and review what we learned together.
Suppose I could get a full loan to buy a house. The total price of the house is half a million dollars. To attract buyers, the mortgage rate is discounted for the first four years: 3%, 4%, 5%, and 6%, respectively. For every year after that, the rate is 5%. I pay the loan back year after year, up to $200,000 each time. How many years will it take to pay the house off completely?
Think about how you can solve this problem with Python. If you think
clearly, you can write a program to try it. The best way to learn to program
is to get your hands dirty and try to solve problems. The following is the
author’s solution, for reference only:
debt = 500000                      # the full loan
payment = 200000                   # maximum repayment per year
thisisinterest = (0.03, 0.04, 0.05, 0.06)
year = 0
while debt > 0:
    year = year + 1
    if year <= 4:
        interest = thisisinterest[year - 1]
        # the subscript of the sequence starts at zero
    else:
        interest = 0.05
    debt = debt * (1 + interest) - payment
    print("after year", year, "the remaining debt is", debt)
print("the loan is paid off in year", year)
Chapter 8: Functions and Modules
In this chapter, we’ll look at other process-oriented encapsulation methods
—namely, functions and modules. Functions and modules encapsulate
chunked instructions into blocks of code that can be called repeatedly and
organize a set of interfaces with function names and module names to
facilitate future calls.

What Are Functions?


It’s a bit of a pain because, whenever you think of functions, you will
remember the topic you have learned in mathematics. In mathematics, a
function represents a correspondence between sets. For example, all books
are a collection, and all pens are a collection. There is a correspondence
between the set of books and the set of pens, which can be expressed as a
function.
Let's take one mathematical example. The following cube function maps a natural number to its cube:
f(x) = x³
(where x is a natural number)


In other words, the function f (x) defines the correspondence between two
sets of numbers:
x -> f(x)
1 -> 1
2 -> 8
3 -> 27
4 -> 64
...

A mathematical function defines a static correspondence. From a data point of view, a function is like a magic box that transforms what goes in, as if turning a pig into a rabbit. For the function f(x) just defined, what goes in is a natural number, and what comes out is the cube of that natural number. With the function, we implement a data transformation.
The magic transformation of a function does not happen out of thin air. For a function in programming, a series of instructions spells out how the function works. A programming function not only carries out a data transformation; through its instructions, it can accomplish other tasks as well. So, a programmer can also understand functions from the perspective of program encapsulation.
For programmers, a function is exactly such a syntactic construct. It encapsulates a number of instructions into a single combination move. Once the function is defined, we can unleash the whole combination simply by calling it. A function is therefore an exercise in the philosophy of encapsulation. The input data are called parameters, and they affect the behavior of the function, as if the same combination move could be delivered with different levels of power.
Thus, we have three ways of looking at functions: The correspondence of
sets, the magic box of data, and the encapsulation of statements.
Programming textbooks generally choose one of these to describe what a
function is. All three explanations are correct. The only difference is the
perspective. By cross-referencing the three interchangeable interpretations,
you can better understand what a function is.

Defining Functions
Let’s first make a function. The process of making a function is also called
defining a function. We call this function squareofnum(). As the name
suggests, the function calculates the sum of the squares of two numbers:
def squareofnum(first, second):
    first = first ** 2
    second = second ** 2
    result = first + second
    return result

The first keyword to appear was “def”. This keyword tells Python, “here
comes the definition of the function.” The keyword def is followed by
squareofnum, the name of the function. After the function name, there are
parentheses to indicate which arguments the function takes—namely, first
and second—in parentheses. Parameters can be multiple in number or none
at all. According to Python Syntax, the parentheses following a function
should be preserved even if no input data is available.
In defining the function, we use the symbols (variables) first and second to refer to the input data. Until we actually use the function, we cannot say what numbers first and second are. Defining a function is like practicing a martial-arts move; only when you actually call the function do you use real input data to decide how hard to strike. Parameters act like variables inside the function definition, appearing in the instructions in symbolic form. Because a parameter in a function definition is a formal representation rather than real data, it is also called a formal parameter.
In defining the function squareofnum(), we complete the symbolic square
summation with parameters first and second. In the execution of a function,
the data represented by the parameter does indeed exist as a variable, as we
will elaborate on later.
At the end of the parentheses, we reach the end of the first line. There is a colon at the end, and the following four lines are indented. As before, the colon and indentation indicate subordination: the four indented lines of code belong to the function squareofnum(). A function is an encapsulation of code; when the function is called, Python executes the statements that belong to it until they end. For squareofnum(), the first three lines are familiar arithmetic statements. The last statement is a return. The keyword return describes the return value of a function, which is the function's output data.
A function ends as soon as return executes, regardless of whether there are other statements after it in the function definition. If you replace squareofnum() with the following:
def squareofnum(first, second):
    first = first ** 2
    second = second ** 2
    result = first + second
    return result
    print("Here ends the result")  # this line is never executed

Then, when the function executes, it only runs up to return result. The later print() statement, although also a function call, is never executed. So return both ends the function and specifies the return value. In Python syntax, return is not required. If there is no return, or no value after return, the function returns None, the empty data in Python used to indicate nothing. The return keyword can also return multiple values; just list them after return, separated by commas, as in the sketch below.
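For example, a small sketch with a hypothetical function squareandcube() that returns two values at once; the values come back as a tuple and can be unpacked into separate variables:
def squareandcube(number):
    """return both the square and the cube of the argument"""
    return number ** 2, number ** 3

first, second = squareandcube(3)
print(first, second)   # prints 9 27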

How to Call Functions?


Above, we saw how to define a function. Defining a function is like forging a weapon; you still have to wield it to make it useful. Using a function is called calling the function. In the previous chapter, we have already seen how to call the print() function:
print("Hello python universe!")

We use the function name directly, with specific arguments in parentheses. What appears in the parentheses is no longer the symbol used in the definition, but actual data, the string "Hello python universe!". Data supplied during a function call are therefore called arguments, in contrast to the formal parameters of the definition.
The function print() returns None, so we do not care about its return value. But if a function returns something meaningful, we can capture it. A common practice is to assign the return value to a variable for later use. The squareofnum() function is called in the following program:
result =squareofnum(4,6)
print(result)

Python knows that 4 corresponds to the parameter first in the function definition and 6 corresponds to the parameter second, and passes them to squareofnum(). The function squareofnum() executes its internal statements until it returns the value 52. The return value 52 is assigned to the variable result, which is then printed by print().
A function call is written the same way as the part after def on the first line of the function definition; the difference is that when we call the function, we put real data in the parentheses and pass it to the function as arguments. Besides literal data, the arguments can also be variables that already exist in the program, such as:
first = 8
second = 9
result = squareofnum(first, second)
print(result)
Function Documentation
Functions can encapsulate code and reuse it. For some frequently called
programs, if you can write a function and call it every time, it will reduce
the amount of repetitive programming. However, too many functions can
cause problems. The common problem is that we often forget what a
function is supposed to do. Of course, you can find the code that defines the
function, read it line by line, and try to understand what you or someone
else is trying to do with it. But the process sounds painful. If you want your
future self or others to avoid similar pain, you need to write functions with
clear documentation of what the functions do and how they are used.
We can use the built-in function help() to find the documentation for a
function. Take the function min() for example. This function returns the smallest of its arguments. For example:
variable= min(6,7,9,12)
print(variable)
# result is 6

The function min() takes multiple arguments and returns the smallest of them. If we cannot remember how min() and its parameters work, we can ask help().
>>> help(min)

Help on built-in function min in module builtins:

min(...)
    min(iterable[, key=func]) -> value
    min(d, e, f, ...[, key=func]) -> value

    With a single iterable argument, return its smallest item.
    With two or more arguments, return the smallest argument.
(END)

As you can see, the function min() can be called in two ways; our previous call used the second way. The documentation also explains what min() basically does. The function min() is one of Python's built-in functions, so its documentation is prepared in advance. For our own custom functions, we need to write the documentation ourselves. The process is not complicated; here is a documented version of the square-sum function, now called sumdefined():
def sumdefined(first, second):
    """return the square sum of two arguments"""
    first = first ** 2
    second = second ** 2
    third = first + second
    return third

At the beginning of the function body, a multi-line string is added. This string is indented like the rest of the body and becomes the documentation for the function. If we use help() to view the documentation for sumdefined(), help() returns what we wrote when we defined the function:
>>> help(sumdefined)
Help on function sumdefined in module __main__:

sumdefined(first, second)
    return the square sum of two arguments

In general, the documentation should be as detailed as possible, especially for the parameters and return values that people care about.

Parameter Passing
Basic Pass Parameters
Passing data as arguments to a function is called parameter passing. If there
is only one parameter, then parameter passing is simple, simply mapping
the only data entered at the time of the function call to this parameter. If you
have more than one parameter, Python determines which parameter the data
corresponds to when you call the function based on its location. For
example:
def print_arguments(first, second, third):
    """print arguments according to their sequence"""
    print(first, second, third)

print_arguments(1, 3, 5)
print_arguments(5, 3, 1)
print_arguments(3, 5, 1)

In each of the three calls, Python matches the data to the parameters by position. If positional passing feels too rigid, you can pass arguments by keyword instead. When we define a function, we give each parameter a symbolic name. Keyword passing uses these parameter names to match data to symbols, so when keywords are used in a call, the positional correspondence is no longer followed. Using the function definition above, we can pass the arguments by keyword:
print_arguments(third=5,second=3,first=1)

As you can see from the results, Python no longer uses locations to
correspond to parameters but instead uses the names of parameters to
correspond to parameters and data. Positional and keyword passing can be
used together, with one part of the argument being passed based on location
and the other on the name of the argument. When a function is called, all
positional arguments appear before the keyword arguments. Therefore, you
can call:
print_arguments(1, third=5,second=3)

But if you put the positional argument 1 after a keyword argument such as third=5, Python will report an error:
print_arguments(third=5, 1, second=3)

Positional passing and keyword passing match data to formal parameters, so the number of data items and the number of parameters should be the same. However, when defining a function, we can set default values for certain parameters. If we do not provide data for these parameters when calling the function, they take the default values defined, such as:
def f(first, second, third=10):
    return first + second + third

print(f(3, 2, 1))
print(f(3, 2))

The first time the function is called, we supply three pieces of data, which correspond to the three parameters, so the parameter third corresponds to 1. On the second call, we supply only 3 and 2. The function maps 3 and 2 to the parameters first and second by position. By the time we reach the parameter third, there is no more data, so third takes its default value of 10.
Pass the Parcel
All of the above methods of passing arguments require that the number of
arguments is specified when defining the function. But sometimes when we
define a function, we don’t know the number of arguments. There are many
reasons for this, and sometimes it’s true that you don’t know the number of
parameters until the program is running. Sometimes, you want the function
to be more loosely defined so that it can be used for different types of calls.
In such cases, packed parameter passing is very useful. As before, packed passing comes in a positional form and a keyword form. Here is an example of packed positional passing:
def package(*all_arguments):
    print(type(all_arguments))
    print(all_arguments)

package(2, 7, 9)
package(5, 6, 7, 1, 2, 3)

Both calls use the same package() definition, even though the number of arguments differs. When package() is called, all the data is collected, in order, into a tuple. Inside the function, we read the incoming data through that tuple. This is packed positional passing. To remind Python that the parameter all_arguments is a package, we prefix its name with an asterisk when defining package().
Let’s take another look at the package keyword pass example. This
parameter passing method collects the incoming data into a dictionary:
def package(**arguments):
    print(type(arguments))
    print(arguments)

package(first=1, second=9)
package(fourth=2, fifth=1, third=11)

Similar to the previous example, when the function is called, all the arguments are collected into a data container. With packed keyword passing, however, the container is no longer a tuple but a dictionary. Each keyword argument in the call becomes an element of the dictionary: the parameter name becomes the element's key, and the data becomes the element's value. All the arguments are collected and passed to the function. As a reminder that the parameter arguments is the dictionary used for packed keyword passing, we prefix it with **.
Packed positional and packed keyword parameters can also be used together. For example:
def package(*place, **keywords):
    print(place)
    print(keywords)

package(1, 2, 3, a=7, b=8, c=9)

You can go a step further and mix packed passing with basic passing. In a definition, the parameters appear in the order: positional → keyword → packed positional → packed keyword, as in the sketch below. With packed passing, we have more flexibility in describing data when defining functions.
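A minimal sketch of such a mixed definition, using a hypothetical function mixedpackage() that combines one basic positional parameter with packed positional and packed keyword parameters:
def mixedpackage(first, *place, **keywords):
    print(first)       # the basic positional parameter
    print(place)       # remaining positional arguments, collected into a tuple
    print(keywords)    # keyword arguments, collected into a dictionary

mixedpackage(1, 2, 3, a=7, b=8)
# prints 1, then (2, 3), then {'a': 7, 'b': 8}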

Unwrap
In addition to being used in function definitions, * and ** can also be used in function calls. There they implement a syntax called unpacking. Unpacking lets us pass a data container to a function and have it automatically decomposed into arguments. Note that packing and unpacking are not opposite operations; they are two relatively independent features. Here is an example of unpacking:
def packagediscontinue(first, second, third):
    print(first, second, third)

args = (1, 2, 3)
packagediscontinue(*args)

In this example, packagediscontinue() is defined with basic parameter passing: it takes three parameters, matched by position. But when we call it, we use unpacking. As you can see, we pass a single tuple to the function. A tuple by itself cannot correspond to three parameters in the basic way of passing arguments, so we prefix args with an asterisk (*). This tells Python to break the tuple into three elements, each of which corresponds to one positional parameter of the function. Thus, the three elements of the tuple are assigned to the three parameters.
Accordingly, a dictionary can also be used for unpacking, with the same packagediscontinue() definition:
args = {"first": 1, "second": 2, "third": 3}
packagediscontinue(**args)

When the dictionary args is unpacked, each of its key-value pairs is passed to packagediscontinue() as a keyword argument. Unpacking is used in function calls, and within a single call the different styles can be mixed. The basic order is still: positional → keyword → positional unpacking → keyword unpacking, as sketched below.
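A brief sketch of such a mixed call, reusing the packagediscontinue() definition from above; the containers args and kwargs are just example data:
def packagediscontinue(first, second, third):
    print(first, second, third)

args = (2, 3)
packagediscontinue(1, *args)                 # position plus positional unpacking: prints 1 2 3

kwargs = {"third": 3}
packagediscontinue(1, second=2, **kwargs)    # position, keyword, and keyword unpacking: prints 1 2 3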

Recursion
GAUSS and Mathematical Induction
Recursion is a function calling itself. Before we get to recursion, let's look at a short story about the mathematician Gauss. It is said that a teacher once punished the whole class by making them work out the sum of 1 to 100 before going home. Only seven years old, Gauss came up with a clever solution that became known as the Gauss summation formula. Here is how we would solve Gauss's summation programmatically:
addition = 0
for result in range(1, 101):
    addition = addition + result
print(addition)

As the program shows, a loop is a natural way to solve this problem. But it is not the only solution; we can also solve the problem in the following way:
def sumgauss(result):
    if result == 1:
        return 1
    else:
        return result + sumgauss(result - 1)

print(sumgauss(100))

The above solution uses recursion: the function calls itself within its own definition. To make sure the computer does not get stuck forever, recursion requires the program to have a base case that can be reached. The key to recursion is to express the connection between one step and the next. For example, if we already know the cumulative sum of 1 to 63, which is sumgauss(63), then the sum of 1 to 64 is easy to find: sumgauss(64) = sumgauss(63) + 64.
When we program recursively, we start from the end result: to find sumgauss(100), the computer breaks the calculation down into finding sumgauss(99) and adding 100. And so on, until the problem is broken down to sumgauss(1), which triggers the termination condition, the result == 1 branch of the if structure, and returns the concrete number 1.
Although the whole process of recursion looks complicated, when we write the program we only need to focus on the initial condition, the termination condition, and the connection between steps, not on the individual calculations. The computer takes care of the execution.
Recursion comes from mathematical induction. Mathematical Induction is a
Mathematical proof, often used to prove that a proposition is valid in the
range of natural numbers. With the development of modern mathematics,
proofs within the scope of natural numbers have actually formed the basis
of many other fields, such as mathematical analysis and number theory, so
mathematical induction is of vital importance to the whole mathematical
system.
The mathematical induction itself is very simple. If we want to prove a proposition for the natural numbers, then:
The first step is to prove that the proposition holds for n = 1.
The second step is to prove that, assuming the proposition holds for an arbitrary natural number n, it also holds for n + 1.

Proof of the Proposition


Think about the two steps above. Together they mean: the proposition holds for n = 1 → it holds for n = 2 → it holds for n = 3, and so on, to infinity. Therefore, the proposition holds for every natural number. It is like a row of dominoes: we make sure that domino n falling causes domino n + 1 to fall, and then we only need to push over the first domino to be sure that every domino falls.

Function Stack
Recursion in the program requires the use of the Stack data structure. The
so-called data structure is the organization of data stored by a computer.
The stack is a kind of data structure, which can store data in an orderly way.
The most prominent feature of the stack is “Lifo, Last In, First Out”. When
we store a stack of books in a box, the books we store first are at the bottom
of the box, and the books we store later are at the top. We have to take the
books out of the back so that we can see and take out the books that were in
the first place. This is Lifo. The stack is similar to this book box, only “last
in, first out”. Each book, that is, each element of the stack, is called a frame.
The stack supports only two operations: Pop and push. The stack uses pop
operations to get the top element of the stack and push operations to store a
new element at the top of the stack.
As we said before, in order to compute sumgauss(100), we have to pause sumgauss(100) and start computing sumgauss(99). To calculate sumgauss(99), we pause it and call sumgauss(98), and so on. Before the termination condition is triggered, there are many unfinished function calls. Each time a function is called, a new frame is pushed onto the stack to hold information about that call. The stack grows until we work out sumgauss(1); then we return to sumgauss(2), sumgauss(3), and so on. Because the stack has the last-in, first-out property, each frame we pop is exactly the one we need next: sumgauss(2), sumgauss(3), and so on, until the frame hidden at the bottom, sumgauss(100), is finally popped.
The process of running a program can therefore be seen as a stack that first grows and is then torn down. Each function call pushes a frame onto the stack; if there is another function call inside that function, yet another frame is added. When a function returns, its frame is popped off the stack. At the end of the program, the stack is empty and the program is complete.
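As a small, hedged illustration in CPython (the exact number may differ on your system), the sys module reports how deep the stack is allowed to grow, and a recursion without a base case runs into that limit:
import sys

print(sys.getrecursionlimit())   # the default limit on stack depth, typically 1000

def endless(n):
    return endless(n + 1)        # no base case, so frames keep piling onto the stack

# calling endless(1) would eventually raise RecursionError: maximum recursion depth exceeded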

Scope of Variables
With a function stack in place, the scope of a variable becomes simple. A
new variable can be created inside a function, such as the following:
def variable(first, second):
    third = first + second
    return third

print(variable(4, 6))

In fact, Python looks for variables in more than the current frame. It also
looks for variables that are defined outside the function in Python’s main
program. So, inside a function, we can “see” variables that already exist
outside the function. For example, here’s the program:
def inner_var():
    print(result)   # result is found in the main program

result = 5
inner_var()         # prints 5
When a variable already exists in the main program, a function can still create another variable with the same name by assigning to it. Inside the function, the variable in its own frame takes precedence. In the following program, both the main program and the function externalvariable() have a detail variable. Inside externalvariable(), the internal detail is used first:
def externalvariable():
    detail = "Authors Python"
    print(detail)

detail = "This is crazy"
externalvariable()   # prints "Authors Python"
print(detail)        # prints "This is crazy"

The function uses its own internal copy, so what happens to detail inside the function does not affect the external variable detail. The parameters of a function behave in a similar way: we can think of a parameter as a variable inside the function. When the function is called, the incoming data is assigned to these variables, and when the function returns, the variables associated with the parameters are cleared. But there are exceptions. When we pass a list to a function and modify it inside the function, the list outside the function changes as well. When the argument is a data container, there is only one container shared by the inside and the outside of the function, so operations on the container inside the function affect the outside. This involves a subtle mechanism in Python that we will explore in more detail later. For now, remember that for data containers, changes made inside a function affect the outside; the sketch below illustrates this.
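A minimal sketch of this behavior, using a hypothetical function append_item() and a small example list:
def append_item(container):
    container.append("new")   # operates on the very same list object that was passed in

items = [1, 2]
append_item(items)
print(items)                  # prints [1, 2, 'new']; the list outside the function has changed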

Introducing Modules
There used to be a popular technical discussion on the Internet: “How do
you kill a dragon with a programming language?” There were many
interesting answers, such as the Java Language “Get out there, find the
Dragon, develop a multi-tier dragon kill framework, and write a few articles
about it... but the dragon wasn't killed." That answer mocks Java's complex frameworks. The C language answer is: "Get there, ignore the dragon, raise your sword, cut off the dragon's head, and find the princess." That answer praises the power of the C language and the commitment of the C community behind the Linux kernel. As for Python, it's simple:
import functionlibrary

People who know Python modules smile at this line of code. In Python, a .py file constitutes a module. With modules, you can call functions defined in other files, which lets you reuse existing Python programs in new programs. Let's start with a file first.py that reads as follows:
def smile():
    print("HaHaHaHa")

Then write a laugh.py file in the same directory and introduce the first module into it:
from first import smile

for result in range(10):
    smile()

With the import statement, we can use the smile() function defined in first.py inside laugh.py. Besides functions, we can also introduce data contained in other files. Let's say we write the following in a module trail.py:

In another file, we introduce this variable:
from trail import text
print(text)  # prints "Hello programmer"

For process-oriented languages, a module is a higher level of encapsulation than a function: programs can be reused in units of whole files. A typical process-oriented language such as C has a complete module system. A so-called library consists of commonly used functions packaged into modules for future use. Because Python's libraries are so rich, much of the work can be done by importing libraries and building on the work of others. That is why Python can kill the dragon with a single import statement.
Search Path
When we introduced the module just now, we put the library file and the
application file in the same folder. When you run a program under this
folder, Python automatically searches the current folder for modules it
wants to introduce.
However, Python also goes to other places to find libraries:
(1) the installation path of the Standard Library
(2) the path contained in the operating system environment variable
PYTHONPATH
The Standard Library is an official library of Python. Python automatically
searches the path where the standard library is located. As a result, Python
always correctly introduces modules from the Standard Library. For
example:
import time

For a custom module, put it wherever you see fit and adjust the search path accordingly; when the search path contains the module's folder, Python can find the module.
When Python imports a module, it looks for it along the search path. If the import fails, the search path may be set incorrectly. We can inspect and change the search path as follows.
Inside Python, you can query the search path in the following way:
>>> import sys
>>> print(sys.path)

As you can see, sys.path is a list. Each element in the list is a path that will be searched. You can control Python's search path by adding or removing elements from this list, as in the sketch below.
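For instance, a hedged sketch in the interactive interpreter; the folder and module names here are only examples:
>>> import sys
>>> sys.path.append("/home/user/mylib")   # add an example folder to the search path
>>> import mymodule                       # assuming mymodule.py lives in that folder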
This way of changing the path is dynamic: you have to add the change to every program you write. We can also set the PYTHONPATH environment variable to change the search path statically. On Linux, add the following line to the .bashrc file to change PYTHONPATH:
export PYTHONPATH=/home/user/mylib:$PYTHONPATH

The meaning of this line is to add /home/user/mylib to the existing PYTHONPATH. On a Mac, the file to modify is also under the home folder, and the method is similar to Linux.

You can also set PYTHONPATH on Windows. Right-click your computer and select Properties from the menu; a System window appears. Click Advanced system settings, and a window called System Properties appears. Select Environment Variables, add a new variable named PYTHONPATH, and set its value to the path you want searched.

Installation of Third-Party Modules


In addition to the modules in the standard library, there are many third-party
contributed Python modules. The most common way to install these
modules is to use PIP. PIP is also installed on your computer when you
install Python. If you want to install third-party modules such as Numpy,
you can do so in the following manner:
$pip install numpy

If you use virtualenv, each virtual environment provides a pip matching the Python version of that environment. When you use pip inside an environment, the module is installed into that virtual environment. If you switch to another virtual environment, the modules and module versions available to you change accordingly, which avoids the embarrassment of modules not matching the Python version.

Additional tools for installing third-party modules are available with EPD Python and Anaconda; see their official websites. You can use the following command to list all installed modules along with their versions:
$pip freeze
Chapter 9: Reading and Writing Files in Python
Once you understand the basics of object-oriented programming, you can
take advantage of the wide variety of objects in Python. These objects can
provide a wealth of functionality, as we will see in this chapter for file
reading and writing, as well as time and date management. Once we get
used to these powerful objects, we can implement a lot of useful features.
The fun of programming is to implement these functions in a program and
actually run them on a computer.

Storage
Documents
We know that all the data in Python is stored in memory. When a computer
loses power, it’s like having amnesia, and the data in the memory goes
away. On the other hand, if a Python program finishes running, the memory
allocated to that program is also emptied. For long-term persistence, Python
must store data on disk. That way, even if the power goes out or the
program ends, the data will still be there.
A disk stores data in units of files. For a computer, the essence of data is an ordered sequence of binary digits. If we group the sequence into bytes, that is, into chunks of eight binary digits, the data can be treated as text, because an 8-bit binary number corresponds to exactly one character in the ASCII encoding; a small interpreter sketch of this correspondence follows below. Python reads and writes such files with the aid of file objects.
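A small interpreter sketch of the correspondence between characters and 8-bit numbers:
>>> ord("A")    # the character "A" corresponds to the number 65
>>> bin(65)     # 65 in binary is 0b1000001, which fits in eight bits
>>> chr(65)     # and 65 converts back to the character "A"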
In Python, you create file objects with the built-in function open(). When you call open(), you specify the file name and the mode in which to open it:
f = open(filename, mode)

The filename is the name of a file on disk. Common modes for opening a file are:
"r"  # read an existing file
"w"  # create a new file and write to it
"a"  # if the file exists, append to its end; if it does not exist, create a new file and write to it

For example:
>>>f = open("harrypotter.txt","r")

This instructs Python to open the text file harrypotter.txt in read-only mode.
With the object returned above, we can read the file:
file = f.read(30)      # read 30 bytes of data
file = f.readline()    # read one line
file = f.readlines()   # read all the lines and store them in a list, one line per element

If the file is opened with "w" or "a", we can write text to it:
f = open("Harry.txt", "w")
f.write("This is hogwarts")  # write "This is hogwarts" to the file

If you want to write a line, you need to add a newline at the end of the
string. On UNIX systems, the newline is "\n". On Windows, it is "\r\n".
Example:
f.write(" This is the chamber of secrets \n") # UNIX
f.write("This is the chamber of secrets \r\n") # Windows

An open file takes up computer resources, so close the file promptly with the file object's close() method after reading and writing.
f.close()

Context Manager
File operations are often combined with a context manager. A context manager specifies a scope of use for an object; on entering or leaving this scope, special actions are triggered, such as allocating or freeing resources for the object. Context managers work well for file operations: we need to close a file when reading and writing are done, but programmers often forget to do so, tying up resources unnecessarily. A context manager can close the file automatically when it is no longer needed.
Here is an example of file manipulation.
f = open("Ron.txt", "w")
print(f.closed) # check to see if the file is open
f.write("I love Quidditch")
f.close()
print(f.closed) # print True

If we add the context manager syntax, we can rewrite the program to:
# use the context manager
with open("new.txt", "w") as f:
    f.write("Hello World!")
print(f.closed)

The second part of the program uses the with ... as ... structure. The context manager has a block that belongs to it; when execution of that block ends, that is, when the statements are no longer indented, the context manager automatically closes the file. In the program, we check the f.closed property to verify that the file has been closed. With a context manager, we use indentation to express the range within which the file object is open. For complex programs, the indentation makes the coder more aware of the stage at which a file is open, reducing the chance of forgetting to close it.
The context manager above relies on special methods of the f object. When the context-manager syntax is used, Python calls the file object's __enter__() method before entering the block and its __exit__() method when the block ends. The file object's __exit__() method contains a self.close() statement, which is why we do not have to close the file explicitly when using a context manager.
Any object that defines an __enter__() method and an __exit__() method can be used with a context manager. Next, we write a custom class ram and define its __enter__() and __exit__() methods, so that objects of the ram class can be used with a context manager:
Program code is given below:
class ram(object):
    def __init__(ayodhya, text):
        ayodhya.text = text
    def __enter__(ayodhya):
        ayodhya.text = "You know " + ayodhya.text
        return ayodhya
    def __exit__(ayodhya, exc_type, exc_value, traceback):
        ayodhya.text = ayodhya.text + "!"

with ram("Its a kingdom") as myram:
    print(myram.text)
print(myram.text)

The output looks as follows:
You know Its a kingdom
You know Its a kingdom!

When the object is initialized, its text property is "Its a kingdom". As you can see, the object invokes the __enter__() and __exit__() methods as it enters and leaves the context, causing its text property to change.

Pickle Pack
We can store text in a file, but ordinary Python objects disappear from memory when the program ends or the computer shuts down. Can we save objects to disk as well?
We can, with Python's pickle package. A pickle is a preserved vegetable: sailors in the Age of Discovery would pickle vegetables and take them along on voyages. pickle in Python has a similar meaning. With the pickle package, we can preserve an object by storing it as a file on disk.
Objects are stored in two steps. The first step is to take the object's data out of memory and convert it into an ordered sequence of bytes; this is called serialization. The second step is to save that sequence to a file. When we need the object again, we read the data from the file back into memory and recover the original object. Here is a concrete example, starting with the first step, serialization, which converts an object in memory into a byte stream:
import pickle

class Animal(object):
    have_trunk = True
    howtheyreproduce = "zygote"

winter = Animal()
pickle_string = pickle.dumps(winter)
Using the pickle package's dumps() method, we convert the object into a byte string. We then store that byte string in a file opened in binary mode. Step 2:
with open("winter.pkl", "wb") as f:
    f.write(pickle_string)

The above procedure is deliberately divided into two steps to illustrate the whole process. Instead, we can use the dump() method and do both steps at once:
import pickle

class Animal(object):
    have_trunk = True
    howtheyreproduce = "zygote"

winter = Animal()
with open("winter.pkl", "wb") as f:
    pickle.dump(winter, f)

The object winter is now stored in the file winter.pkl. With this file, we can read the object back whenever necessary. Reading an object is the reverse of storing it: first we read the byte string from the file, then we use pickle's loads() method to convert it back into an object. We can also combine the two steps with pickle's load() method.
One more thing is needed for recovery: an object depends on its class, so when Python recreates the object, it must be able to find the appropriate class. When we read an object from a file, its class must already be defined in the program. Built-in classes that Python always has, such as lists, dictionaries, and strings, do not need to be defined in your program; for a user-defined class, however, you must define the class before you can load its objects from the file.
Here is an example of a read object:
import pickle

class Animal(object):
    have_trunk = True
    howtheyreproduce = "zygote"

with open("winter.pkl", "rb") as f:
    winter = pickle.load(f)
print(winter.have_trunk)
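For completeness, a hedged sketch of the two-step read mentioned above, assuming the same Animal class definition is already in the program: first read the raw bytes from the file, then convert them back into an object with pickle's loads() method.
import pickle

with open("winter.pkl", "rb") as f:
    pickle_string = f.read()          # step 1: read the serialized bytes from disk

winter = pickle.loads(pickle_string)  # step 2: convert the bytes back into an object
print(winter.have_trunk)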
Chapter 10: Object-Oriented Programming Part 1
Having looked at Python’s process-oriented programming paradigm, we’ll
use a completely different programming paradigm in this chapter—object-
oriented. Python is not just a language that supports an object-oriented
paradigm. Under the multi-paradigm facade, Python uses objects to build its
large framework. As a result, we can get an early start in object-oriented
programming to understand Python’s deep magic.
To understand object orientation, you need to understand classes and
objects. Remember the process-oriented functions and modules that
improve the reusability of your program. Classes and objects also improve
program reusability. In addition, the class and object Syntax also enhances
the program’s ability to simulate the real world. “Emulation” is the very
heart of object-oriented programming.
The object-oriented paradigm can be traced back to the Simula language.
Kristen Nygaard is one of the co-authors of the language. He was recruited
by the Norwegian Ministry of Defence and then served at the Norwegian
Institute of Defence Sciences. As a trained mathematician, Kristen Nygaard used computers to solve computational problems in defense, such as nuclear reactor construction, fleet replenishment, and logistics supply. To solve these problems, Nygaard needed the computer to simulate the real world: for example, what would happen if there were a nuclear leak. Nygaard found that with a procedural, instruction-based approach to programming, it was hard to model real-world individuals. Take a ship, for example. It has data, such as height, width, horsepower, and draught, and it also has behavior, such as moving, accelerating, refueling, and mooring. The ship is an individual. Some individuals can be grouped: battleships and aircraft carriers are both warships. Some individuals contain others: a ship has an anchor.
When people tell stories, they naturally describe individuals from the real world. A computer that knows only sequences of 0s and 1s, however, just mechanically executes instructions. Nygaard hoped that doing computer simulations could be as easy as telling a story, and he knew from his military and civilian experience that such a programming language would have great potential. Eventually he met the computer scientist Ole-Johan Dahl, who helped Nygaard turn his idea into a novel language, Simula. The name of the language is exactly the simulation Nygaard craved.
We can think of object orientation as a bridge between story and instruction.
The coder uses a story-based programming language.
The compiler then translates these programs into machine instructions. But
in the early days of computers, these extra translations consumed too many computer resources. Therefore, the object-oriented paradigm was not popular at first, and pure object-oriented languages were often criticized as inefficient.
With the improvement of computer performance, the problem of efficiency
is no longer a bottleneck. People turned their attention to the productivity of
programmers and began to explore the potential of object-oriented
languages. The first great success in the object-oriented world was the C++
language. Bjarne Stroustrup created the C++ language by adding object-
oriented syntax to the C language. C++ retains the features of C, so it looks very complex. The later Java language moved toward a purer object-oriented paradigm and quickly became a commercial success. C++ and Java were once the most popular programming languages. Microsoft's later C# and Apple's long-supported Objective-C are also typical object-oriented languages.
Python is also an object-oriented language. It’s older than Java. However,
Python allows programmers to use it in a purely procedural way, so its
object-oriented heart is sometimes overlooked. One of Python's philosophies is that "everything is an object." Both the process-oriented paradigm we saw earlier and the functional programming we will see later are, in fact, built on special objects. Learning object orientation is therefore a key part of learning Python; only by understanding Python's objects can we see the full picture of the language.

Classes
When it comes to objects, the first thing to look at is a syntactic structure called a class. The concept of a class here is similar to the way we classify things in daily life: we put similar things into a group and give the group a name. Animals, for example, have trunks in common and reproduce by zygote. Any particular animal is based on the archetype of an animal.
Here’s how we describe animals in Python:
class Animal(object):
    trunk = True
    howtheyreproduce = "zygote"

Here, we define a class with the keyword class. The class name is Animal. In parentheses is the keyword object, which means a thing, an individual; in computer languages, we refer to individuals as objects. A class is not limited to a single individual; there can be many. Animals can include the neighbor's elephant, the tiger running across the horizon, and the little yellow chick kept at home.
Colons and indentation mark the code that belongs to this class. In the
block of code that falls under this class, we define two quantities, one for
the trunk and the other for reproduction, which are called the attributes of
the class. The way we define animals is crude: animals are just "things with
trunks that reproduce from a zygote." Biologists would probably shake their
heads if they saw this, but we are taking our first steps into a simulated world.
In addition to using data attributes to distinguish categories, we sometimes
also distinguish categories based on what these things can do. For example,
birds can move; in this way, a bird is distinguished from a house. These
actions have certain consequences, such as a change in position due to
movement. Such "behavior" attributes are called methods. In Python, you
typically describe a method by defining a function inside a class.
class Animal(object):
    trunk = True
    howtheyreproduce = "zygote"
    def roar(self, sound):
        print(sound)

We added a new method attribute to the animal, which is roar(). The
method roar() looks a lot like a function. Its first argument is self, which
refers to the object itself within the method; I will explain this in more
detail later. It should be emphasized that the first argument of a method
must be self, whether or not the argument is used. The remaining
parameter, sound, is there to meet our needs; it represents the content of the
roar. The method roar() simply prints out the sound.

Objects
We have defined the class but, as with a function definition, this is only
forging the tool. To put it to use, we need objects. By calling the class, we
can create an object of the class. For example, I have an animal named
winter. It is an object, and it belongs to the Animal class. We use the
previously defined Animal class to generate this object:
winter= Animal()

This statement creates an object and declares that winter is an object of the
Animal class. Now we can use the code already written in Animal.
As an object, winter has the attributes and methods of Animal. An attribute
is referenced in the form object.attribute. For example:
print(winter.howtheyreproduce)

In the above way, we get the reproductive pattern of winter’s species.


In addition, we can call methods to get winter to do what the animal allows.
For example:
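Assuming the roar() method defined above, the call would look something like this:
winter.roar("Roarrrr")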
When we call the method, we pass only one parameter, the string “Roarrrr”.
This is where methods differ from functions: although we have to add the
self parameter when defining a method of a class, self is only used inside
the class definition, so there is no need to pass anything for self when calling
the method. By calling the roar() method, my winter can roar.
So far, the data describing the object has been stored in the properties of the
class. A generic attribute describes a generic feature of a class, such as the
fact that animals have trunks. All objects that belong to this class share
these properties. For example, winter is an object of animals, so winter has
trunks. Of course, we can refer to a class attribute through an object.
For all individuals within a class, individual differences may exist for
certain attributes. For example, my winter is pink, but not all animals are
pink. Or take the human class: sex is a property of a person, but not every
human has the same value for it. The value of this property varies from
object to object. Tom is an object of the human class, and his sex is male.
Estella is also a human object; her sex is female.
Therefore, in order to fully describe the individual, in addition to the
generic class attributes, we need the object attributes used to describe the
personality.
In a class, we can manipulate the properties of an object through self. Now
we extend the Animal Class:
class Animal(object):
    def roar(self, sound):
        print(sound)
    def whatcolor(self, color):
        self.color = color

winter = Animal()
winter.whatcolor("pink")
print(winter.color)

In the method whatcolor(), we set the object's color attribute through the
self parameter. As with class attributes, we can use the object.attribute form
to read and write object attributes. Since object attributes depend on self,
we must operate on them inside a method. Therefore, object attributes
cannot be assigned an initial value directly under the class body, the way
class attributes can.
Python does, however, provide a way to initialize object attributes. Python
defines a series of specially named methods, often called magic methods,
which it treats in a particular way. A programmer can define these special
methods in a class. For the __init__() method, Python calls it automatically
every time an object is created. Therefore, we can initialize object attributes
inside the __init__() method:
class Animal(object):
    def __init__(self, sound):
        self.sound = sound
        print("my roar is", sound)
    def roar(self):
        print(self.sound)

winter = Animal("Gurrrrr")
winter.roar()
In the above class definition, we show how the class is initialized using the
__init__() method. The __init__() method is called whenever an object is
created, for example when the winter object above is created. It sets the
sound attribute of the object, which can then be used through self in the
roar() method. In addition to setting object attributes, we can also add other
instructions inside __init__(); these instructions are executed when the
object is created. When a class is called, it can be followed by a parameter
list, and the data placed there is passed to the parameters of __init__().
With the __init__() method, we can initialize object attributes at object
creation time.
In addition to manipulating object properties, the self parameter also has the
ability to call other methods of the same class within a method, such as:
class Animal(object):
    def roar(self, sound):
        print(sound)
    def roarcontinous(self, sound, n):
        for i in range(n):
            self.roar(sound)

winter = Animal()
winter.roarcontinous("gurrr", 10)

In the method roarcontinous(), we call another method of the class, roar(),
through self.

Successors
Subclasses
The category itself can be further subdivided into subcategories. Animals,
for example, can be further divided into Amphibians and Reptiles. In
object-oriented programming, we express these concepts through
Inheritance.
class Animal(object):
    trunk = True
    howtheyreproduce = "zygote"
    def roar(self, sound):
        print(sound)

class Amphibian(Animal):
    waytheywalk = "bywalk"
    edible = True

class Reptile(Animal):
    pass
In the class definition of Amphibian, the name in parentheses is Animal.
This shows that Amphibian is a subclass of Animal; that is, Amphibian
inherits from Animal. Naturally, Animal is the parent class of Amphibian,
and Amphibian has all of Animal's attributes. Although we only declare
winter as an Amphibian, it inherits the attributes of the parent class, such as
the data attribute trunk and the method attribute roar(). The new Reptile
class also inherits from Animal. When you create a Reptile object, that
object automatically has the Animal attributes.
Obviously, we can use inheritance to reduce repeated information and
statements in a program. If we defined Amphibian and Reptile separately,
rather than letting them inherit from Animal, the shared attributes would
have to be typed into both definitions. The whole process would become
tedious, so inheritance improves a program's reusability. In the most basic
case, object sits inside the parentheses of the class definition. The class
object is a built-in class in Python and serves as the ancestor of all classes.
Classification is often the first step in understanding the world. We learn
about the world by classifying all sorts of things, and humans have been
sorting since our earliest ancestors. The 18th century was a time of great
maritime discoveries, when European navigators went around the world,
bringing back specimens of plants and animals that had never been seen
before. People were excited by the proliferation of new species, but they
also struggled with how to classify them. Carl Linnaeus proposed a
classification system that paved the way for further scientific discoveries
through the subordination of parent and child classes. Object-oriented
languages and their inheritance mechanisms simply simulate this conscious
human process of classification.
Attribute Overlay
As mentioned above, in the process of inheritance we can enhance the
functionality of a subclass by adding attributes that do not exist in the
parent class. In addition, we can replace attributes that already exist in
the parent class, such as:
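A minimal sketch of such an override, assuming an Amphibian subclass that redefines roar() from Animal:
class Animal(object):
    def roar(self, sound):
        print(sound)

class Amphibian(Animal):
    def roar(self, sound):
        print("gurrr:", sound)

winter = Amphibian()
winter.roar("hello")   # uses Amphibian's roar(), not Animal's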
Amphibian is a subclass of Animal. In Amphibian, we define the method
roar(). This method is also defined in Animal. As you can see, an Amphibian
object calls its own roar() method instead of the parent class's. In effect,
the method roar() in the parent class is overridden by the attribute of the
same name in the child class.
By overriding methods, we can radically change the behavior of subclasses.
But sometimes the behavior of a subclass is an extension of the behavior of
the parent class. In that case, we can use the super keyword to call the
method that is being overridden in the parent class, such as:
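A minimal sketch, again assuming the Amphibian subclass of Animal:
class Animal(object):
    def roar(self, sound):
        print(sound)

class Amphibian(Animal):
    def roar(self, sound):
        super().roar(sound)          # first run the parent class's roar()
        print("...and then a croak of its own")

winter = Amphibian()
winter.roar("hello")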
In the subclass's roar() method, we use super. It is a built-in class that
produces an object referring to the parent class. Using super, we call the
parent class's method inside the child class's method of the same name. In
this way, a subclass method can both perform the related operations of the
parent class and define additional operations of its own.

What You Missed Out on All Those Years


List Objects
We went all the way from the original “Hey it's Jesus Christ!” program to
objects. As the saying goes, "Life is like a journey; what's important is the
scenery along the way." In fact, the previous chapters have used objects
many times; at that time, however, the concept of an object had not been
introduced yet. It is time to look back at all the objects we missed.
Let's start with an acquaintance: the list, one of the data containers. It is a
class, and you can find the class name of any object using a built-in function:
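A minimal sketch using the built-in type() function, with a hypothetical list a:
>>> a = [23, 42, 34]
>>> type(a)
<class 'list'>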
From the result returned, we know that a is of the list type. In fact, a type is
just the name of the class to which an object belongs; each list belongs to
the list class. This class comes with Python and is pre-defined, so it is
called a built-in class. When we create a new list, we are actually creating
an object of the list class. There are two other built-in functions we can use
to investigate class information further: dir() and help(). The function dir()
is used to query all attributes of a class or object; you can try it yourself.
We have already used the help() function to query the documentation of a
function. It can also be used to display a class's description document.
What it returns is not only a description of the list class but also a brief
description of its attributes. By the way, writing a class description
document is similar to writing a function description document: we just add
the desired description as a multiline string right under the class
definition:
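A minimal sketch, reusing the Animal class as a stand-in:
class Animal(object):
    """A crude model of an animal; help(Animal) will show this text."""
    pass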
The pass in the program is a special Python keyword that says "do nothing"
in that syntactic slot. The keyword preserves the structural integrity of the
program.
From the queries above, we see that classes also have many "hidden skills."
Some list methods, for example, return information about the list, as in the
sketch below. The list is greatly enhanced by calling its methods; seeing the
list again from an object's point of view gives it a whole new depth.
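A small sketch of such information-returning methods, assuming a list a:
>>> a = [23, 42, 34, 42]
>>> a.count(42)   # how many times 42 appears in the list
2
>>> a.index(34)   # position of the first 34
2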
Tuples and String Objects
Tuples, like lists, are sequences. However, the contents of a tuple cannot be
changed, so a tuple can only be queried, not modified:
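For instance, with an assumed tuple t:
>>> t = (23, 42, 34)
>>> t.count(42)     # querying works
1
>>> t[0] = 10       # trying to modify raises a TypeError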
Strings can be treated as a special kind of tuple, so the query methods of
tuples also work on strings.
Although strings behave like tuples, string objects have a number of
methods that appear to change the string. This sounds like a violation of
tuple immutability. In fact, these methods do not modify the string object in
place; they leave the original string alone and build a new string, so
immutability is not violated. The documentation of these methods follows a
convention: str is a string, sub is a substring of str, s is a sequence whose
elements are strings, and width is an integer indicating the width of the
newly generated string. These methods are often used for string processing.
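A few common examples of such string methods, shown as a sketch:
>>> s = "hello world"
>>> s.upper()         # returns a NEW string; s itself is unchanged
'HELLO WORLD'
>>> s.split(" ")      # split into a list of substrings
['hello', 'world']
>>> "42".rjust(5)     # pad to a given width
'   42'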
Chapter 11: Object-Oriented Programming Part 2
We'll explore the meaning behind Python's “everything is an object.” Many
pieces of syntax, such as operators, element references, and built-in
functions, actually come down to special methods of objects. This design
gives Python multiple paradigms and a rich syntax, including operator
overloading and attributes generated on the fly, on top of a simple
architecture. In the second half of this chapter, we delve into important
mechanisms related to objects, such as dynamic typing and garbage collection.

Operators
We know that list is the class of lists. If you look at the attributes of a list
with dir(list), you will see that one of them is __add__(). By its form,
__add__() is a special method. What is so special about it? This method
defines the meaning of the + operator for list objects. When two list objects
are added, the lists are merged, and the result is a combined list:
>>>print([10,32,43] + [45,26,49])
# This becomes [10,32,43,45,26,49]

Operators such as +, -, and, and or are implemented through special
methods, such as:
"def" + "uvw"
# This will become defuvw

What was actually executed is:

"def".__add__("uvw")

Whether two objects can be added depends first on whether the
corresponding object has an __add__() method. As long as the object has an
__add__() method, we can perform an addition even if the object is not
something you could add mathematically. The operator form is also simpler
and easier to write than calling the special method directly; some of the
following operations would be clumsy to spell out as special methods.
Try the following, see what it looks like, and think about the operator:
>>> (5).__mul__(4)        # 5*4
>>> True.__or__(False)    # True or False
The special methods associated with these operations can also change the
way they are performed. For example, lists cannot be subtracted from each
other in Python. You can test the following:
>>>[3,5,7] - [5,7]

There is an error message saying that list objects cannot be subtracted; that
is, list does not define the "-" operator. We can create a subclass of list and
define subtraction by adding a __sub__() method, such as:
class Addsubstraction(list):
    def __sub__(self, b):
        a = self[:]
        b = b[:]
        while len(b) > 0:
            element_b = b.pop()
            if element_b in a:
                a.remove(element_b)
        return a

print(Addsubstraction([1,2,3]) - Addsubstraction([3,4]))

In the example above, the built-in function len() returns the total number of
elements contained in the list. The method __sub__() defines the "-"
operation: remove from the first list the elements that appear in the second
list. So the two Addsubstraction objects that we created can be subtracted.
Even if a method of the same name had already been defined in the parent
class, redefining it in the child class overrides the parent's version; that is,
the operator is redefined.
Defining operators is useful for complex objects. For example, humans have
multiple attributes, such as name, age, and height, and we can define
comparison between humans by age alone. In this way, you can, for your
own purposes, add operations to an object that it did not have before. If you
have been through military training, you have probably played the game of
"turn left, turn right": when the drill master shouts a command, you must
take the opposite action, so if you hear "turn to the left," you must turn to
the right. In this game, the operators "turn left" and "turn right" are
effectively redefined.

Element References
Here are some common table element references:
li = [23, 42, 34, 49, 65, 86]
print(li[4])
When the above program reaches li[4], Python recognizes the [] symbol and
calls the __getitem__() method.
li = [23, 42, 34, 49, 65, 86]
print(li.__getitem__(4))

Output will be:


65

Take a look at the following and think about what it corresponds to:
li = [23, 42, 34, 49, 65, 86]
li.__setitem__(4, 60)
print(li)

Output will be:


[23, 42, 34, 49, 60, 86]

Just a Small Example for Dictionary Datatype


thisisdictionary = {"first": 1, "second": 2}
thisisdictionary.__delitem__("first")
print(thisisdictionary)
# prints {'second': 2}

Implementation of Built-In Functions


Like operators, many built-in functions are backed by calls to special
methods of objects. For example:
len([34,24,35,89])
# This returns how many elements are present

What it actually does is explained below in detail:


[34,24,35,89].__len__()

The built-in function len() is simply easier to write than __len__().
Try the following and think of the corresponding built-in function for each;
these are essentially the mathematical built-ins expressed as special methods.
(-69).__abs__()    # the absolute value of the number, same as abs(-69)
(2.3).__int__()    # conversion to an integer, same as int(2.3)

There are many built-in mathematical functions like these that make
programming easier for numeric and statistical work.
Attribute Management
In discussing inheritance, we mentioned Python's mechanism of attribute
overriding. To understand attribute overriding, it helps to understand how
Python stores attributes. When we call an attribute of an object, that
attribute may have many sources: besides attributes on the object itself and
on its class, it may be inherited from an ancestor class. The attributes owned
by a class or object are recorded in its __dict__. This is a dictionary whose
keys are attribute names and whose values are the corresponding attributes.
When Python looks up an attribute of an object, it searches these
dictionaries in the order of inheritance.
Let's look at the following classes and objects: the Amphibian class inherits
from the Animal class, and winter is an object of the Amphibian class.
Below is the code:
class Animal(object):
    trunk = True
    def roar(self):
        print("some roaring")

class Amphibian(Animal):
    walk = False
    def __init__(self, age):
        self.age = age
    def roar(self):
        print("gurrr")

winter = Amphibian(4)
print("===> winter")
print(winter.__dict__)
print("===> Amphibian")
print(Amphibian.__dict__)
print("===> Animal")
print(Animal.__dict__)
print("===> result")
print(object.__dict__)

Here’s our output (Please don’t get confused because this is quite complex):
===> winter{'age': 4}
===> Amphibian {'walk': False, 'roar':, '__module__': '__main__', '__doc__': None, '__init__': }
===>Animal {'__module__': '__main__', 'roar':, '__dict__':, 'trunk': True, '__weakref__':, '__doc__':
None}
===>result {'__setattr__':, '__reduce_ex__'

The output is ordered by proximity to the winter object. The first part shows
the attributes of the winter object itself, namely age. The second part shows
the attributes of the Amphibian class, such as walk and the __init__()
method. The third part shows the Animal class attributes, such as trunk. The
last part belongs to the built-in object class.
If we look at the attributes of the object winter with the built-in function
dir(), we see that winter has access to all four parts. In other words, the
attributes of an object are managed hierarchically. For all the attributes that
winter can reach, there are four levels: winter, Amphibian, Animal, and
object. When we call an attribute, Python walks down these levels one by
one until it finds the attribute. Because objects do not need to store the
attributes of their ancestor classes repeatedly, this hierarchical mechanism
saves storage space.
An attribute may be defined at more than one level. As Python walks down,
it picks the first one it encounters; this is how attribute overriding works. In
the output above, we see that both Amphibian and Animal have a roar()
method. If you call the roar() method on winter, you get the Amphibian
version, which is closer to the object winter.
winter.roar()

Attributes of a subclass take precedence over attributes of the same name in
the parent class, which is the key to attribute overriding.
It is important to note that everything above concerns reading attributes. If
you assign to an attribute, Python does not need to drill down through the
layers. Here is how to create a new Amphibian object, rainy, and modify an
attribute such as trunk through rainy:
rainy= Amphibian(3)
rainy.trunk = False
print(winter.trunk)

Although rainy modifies the trunk attribute value, this does not affect
Animal's class attribute. When we look at rainy's object attributes with the
following statement, we see that a new object attribute named trunk has
been created:
print(rainy.__dict__)

Instead of going through objects and inheritance, we can directly manipulate
the attributes of an ancestor class, such as:
Animal.trunk = 3

Conceptually, this is equivalent to modifying the "trunk" entry of Animal's __dict__:

Animal.__dict__["trunk"] = 3

(In practice, a class's __dict__ is read-only, so the Animal.trunk assignment is the form to use.)

Features
There may be dependencies between different properties of the same object.
When a property is modified, we want other properties that depend on that
property to change at the same time. At this point, we cannot store attributes
in a static, dictionary-like manner. Python provides a variety of ways to
generate attributes on the fly. One of them is the property, a special kind of
attribute. For example, let's add an adult property to the Amphibian class:
when the age of the object exceeds 1, adult is True; otherwise, it is False.
Here is the programming code:
class Animal(object):
    trunk = True

class Amphibian(Animal):
    walk = False
    def __init__(self, age):
        self.age = age
    def adultage(self):
        if self.age > 1.0:
            return True
        else:
            return False
    adult = property(adultage)   # property is built-in

winter = Amphibian(2)
print(winter.adult)   # True
winter.age = 0.5
print(winter.adult)   # False

The property is created using the built-in function property(). property()
can take up to four parameters. The first three are functions that tell Python
what to do when the property is read, modified, and deleted, respectively.
The last parameter is the property's documentation, which can be a string
used for explanation.
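The code for the example discussed next is only a sketch; the class name Num and the helper names get_neg, set_neg, and del_neg are assumptions:
class Num(object):
    def __init__(self, value):
        self.value = value
    def get_neg(self):
        return -self.value
    def set_neg(self, value):
        self.value = -value
    def del_neg(self):
        print("value also deleted")
        del self.value
    neg = property(get_neg, set_neg, del_neg, "I'm negative")

num = Num(1.1)
print(num.neg)     # -1.1
num.neg = -22
print(num.value)   # 22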
In this example, num is a number object and neg is a property representing
its negation. When the number is fixed, its negation is determined; when we
modify the negation, the value of the number itself should change
accordingly. These two points are implemented by the get and set functions.
The delete function says what should happen if you delete the property neg:
here, the underlying value is deleted as well. The last parameter of
property() ("I'm negative") is the documentation for the property neg.

The __getattr__() Method
In addition to the built-in function property(), we can also use
__getattr__(self, name) to define attributes that are generated on the fly.
When we access an attribute of an object and it cannot be found through the
normal __dict__ mechanism, Python calls the object's __getattr__() method
to generate the attribute on the spot. While each property needs its own
handling functions, __getattr__() can handle all dynamically generated
attributes in a single function, branching on the attribute name. For
example, when we query the attribute male below, the method raises an
AttributeError. Note that __getattr__() is only consulted for attributes that
cannot be found in the usual way.
__setattr__(self, name, value) and __delattr__(self, name) can be used to
intercept modifying and deleting attributes. They have a wider range of
application and are invoked for any attribute.
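A small sketch of these two hooks, using a hypothetical Logged class:
class Logged(object):
    def __setattr__(self, name, value):
        print("setting", name, "to", value)
        object.__setattr__(self, name, value)   # actually store the attribute
    def __delattr__(self, name):
        print("deleting", name)
        object.__delattr__(self, name)

item = Logged()
item.price = 10    # prints: setting price to 10
del item.price     # prints: deleting price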
Real-time attribute generation is a very interesting concept. In Python
Development, you might use this approach to manage the properties of
objects more reasonably. There are other ways to generate properties on the
fly, such as using the descriptor class. Interested readers may refer to this
code below.
class Animal(object):
    trunk = True

class Amphibian(Animal):
    walk = False
    def __init__(self, age):
        self.age = age
    def __getattr__(self, name):
        if name == "old":
            if self.age > 1.0:
                return True
            else:
                return False
        else:
            raise AttributeError(name)

winter = Amphibian(2)
print(winter.old)    # True
winter.age = 0.5
print(winter.old)    # False
print(winter.male)   # AttributeError

Dynamic Type
Dynamic typing is another important core concept of Python. As mentioned
earlier, Python variables do not need to be declared, and a variable can be
reassigned to any other value at any time. Python variables shift as freely as
sand in the wind, and that flexibility is the embodiment of dynamic typing.
Let's start with the simplest assignment statement:
c = 3

In Python, the integer 3 is an object, and "c" is the object's name. More
precisely, an object name is a reference to an object. An object is an entity
stored in memory, but we do not have direct access to it; the object name is
a reference to it. Manipulating objects by reference is like picking up beef
from a hot pot with chopsticks: the object is the beef, and the object name
is the pair of chopsticks.
With the built-in function id(), we can see which object a reference points
to. This function returns the object's identity number.
c= 3
print(id(3))
print(id(c))
As you can see, after the assignment, object 3 and reference c return the
same number.
In Python, an assignment is simply to use the object name as a chopstick to
pick up other food. Each time we assign a value, we let the reference on the
left point to the object on the right. A reference can point to a new object at
any time:
c= 5
print(id(c))
c= "for"
print(id(c))

In the first statement, 5 is an integer object stored in memory, and the
assignment makes the reference c point to it. In the second statement, an
object "for" is created in memory; it is a string, and the reference c now
points to "for". By printing id() twice, we can see that the object the
reference points to has changed. Since a variable name is a reference that
can be redirected at any time, its type can naturally change during the
program. This is why Python is a dynamically typed language.
A class can have more than one object with equal values. For example, two
long strings can be different objects even though their values are equal.
In addition to printing ids directly, we can use the is operator to determine
whether two references point to the same object. For small integers and
short strings, however, Python caches the objects instead of constantly
creating and destroying them. Therefore, the following two references point
to the same integer object 5:
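A minimal illustration (the exact caching range is an implementation detail of CPython):
>>> c = 5
>>> d = 5
>>> c is d     # both names refer to the one cached object for 5
True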

Mutable and Immutable Objects

With the first two statements in the code below, we make c and d point to
the same integer object 5; d = c means pointing the reference d at the object
that c refers to. We then do some arithmetic, adding 3 to c and assigning the
result back to c. As you can see, c now points to the integer object 8, while
d still points to object 5. Essentially, the addition does not change object 5;
Python just points c at the result of the addition, another object, 8. It is like
a magic trick that turns an old man into a young girl: neither the old man
nor the young girl has actually changed; the girl has simply replaced the old
man on stage. Here, only a reference has been redirected. Redirecting one
reference does not affect where other references point; each reference is
independent of the others.
Below is the code:
c= 5
print(id(c))
d= c
print(id(c))
print(id(d))
c= c+ 3
print(id(c))
print(id(8))
print(id(d))
In the code below, when we change list1, the contents of list2 change as
well. The references seem to have lost their independence, but there is no
contradiction. The directions of list1 and list2 have not changed; they still
point to the same list. A list, however, is a collection of multiple references:
each element, such as list1[0] and list1[1], is itself a reference pointing to
another object, such as 23, 45, or 76. The assignment list1[0] = 10 does not
change where list1 points; it changes where list1[0] points. Since that
element is part of the list object, the list object itself changes, and every
reference to this list object sees the change.
Therefore, when you manipulate a list and change an element through an
element reference, the list object itself changes (an in-place change). Lists
are objects that can change themselves, called mutable objects. The
dictionary we saw before is also a mutable data object. Integers, floating-
point numbers, and strings, on the other hand, cannot change themselves;
an assignment can at most redirect a reference. Such objects are called
immutable objects. Tuples contain multiple elements, but those elements
cannot be assigned to at all, so tuples are immutable data objects too.
Below is the code:
list2 = [23,45,76]
list1 = list2
list1[0] = 10
print(list2)

Looking at Function Parameter Passing from Dynamic Typing

The parameter x is a new reference. When we call the function f(), a is
passed as data to the function, so x will point to the object that a refers to;
this is an assignment. If a is an immutable object, then the references a and
x are independent of each other, meaning that operations on the parameter x
do not affect the reference a.
In the list version of the example (sketched after the code below), a points
to a mutable list. When the function is called, a is passed to the parameter x,
and both references then point to the same mutable list. As we saw earlier,
manipulating a mutable object through one reference affects the other
references, and the result of running the program shows exactly that: when
you print a afterwards, the result is [100, 2, 3]. That is, the change made to
the list inside the function is "seen" by the external reference a. Be aware of
this when programming.
Below is the code:
def f(x):
    print(id(x))
    x = 50
    print(id(x))

a = 2
print(id(a))
f(a)
print(a)
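A sketch of the mutable-list version, matching the [100, 2, 3] result described above:
def f(x):
    x[0] = 100      # change the list in place through the parameter

a = [1, 2, 3]
f(a)
print(a)            # [100, 2, 3]: the change is visible outside the function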

Memory Management in Python


1. Reference Management
How a language manages memory is an important aspect of language design
and a major determinant of language performance. Whether it is the manual
memory management of C or the garbage collection of Java, memory
handling is a defining feature of a language. Here we take Python as an
example to illustrate how memory is managed in a dynamically typed,
object-oriented language.
First, let's be clear that object memory management is built on the
management of references. As mentioned before, in Python references are
separate from objects. An object can have multiple references, and each
object keeps a count of how many references point to it, the reference
count. We can use getrefcount() from the sys package in the standard
library to see the reference count of an object. Note that when you pass a
reference to getrefcount() as a parameter, the parameter itself creates a
temporary reference, so the value returned is one higher than you might expect.
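A small sketch of this, with an assumed list a:
import sys

a = [12, 24, 36]
print(sys.getrefcount(a))   # at least 2: the name a plus the temporary argument
b = a
print(sys.getrefcount(a))   # one more, because b now also refers to the list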

2. Garbage Collection
If you eat too much, you get fat, and so does Python. As the number of
objects in a Python program grows, they take up more and more memory.
But you don't have to worry too much about Python's size: it is smart
enough to "lose weight" by starting Garbage Collection in due course, which
purges objects that are no longer used. Garbage collection mechanisms
exist in many languages, such as Java and Ruby. While the ultimate goal is
always to slim down, the weight-loss programs vary greatly from language
to language.
In principle, when the reference count of an object in Python drops to zero,
meaning that there are no references to the object, the object becomes
garbage to be recycled. For example, if a new object is assigned to a
reference, the reference count of the object becomes 1. If the reference is
deleted and the object has a reference count of 0, then the object can be
garbage collected.
The code is below:
variable= [12, 24, 36]
del variable

After del variable, there is no reference left to the previously created list
[12, 24, 36], which means the user can no longer touch or use that object in
any way. If the object stayed in memory, it would be nothing but unhealthy
fat. When garbage collection runs, Python scans for objects whose reference
count is 0 and frees the memory they occupy.
Weight loss, however, is an expensive and laborious process. While garbage
is being collected, Python cannot perform other tasks, and collecting too
frequently can drastically reduce Python's productivity. If there are not
many new objects in memory, there is no need to start garbage collection
often. Therefore, Python only starts garbage collection automatically under
certain conditions: as Python runs, it records the number of object
allocations and deallocations, and garbage collection starts when the
difference between the two rises above a certain threshold.
In addition to the basic recycling approach described above, Python also
employs a Generation recycling strategy. The basic assumption of this
strategy is that objects that live longer are less likely to become garbage in
later programs. Our programs tend to produce large numbers of objects,
many of which are quickly created and lost, but some of which are used
over time. For reasons of trust and efficiency, we believe that such “long-
lived” objects can still be useful, so we reduce the frequency with which
they are scanned in garbage collection.
Chapter 12: Exception Handling
This chapter deals with debugging and exception handling in detail. First of
all, we will start with a small introduction about a bug to get a good
overview of the topic.

What Is a Bug?
Bugs must be the most hated creatures a programmer can have. A bug in the
programmer’s eyes is a bug in a program. These bugs can cause errors or
unintended consequences. Many times, a bug can be fixed after the fact.
There are, of course, irremediable lessons. The European Ariane 5 rocket
exploded within a minute of its first launch. An after-action investigation
revealed that a floating-point number in the navigator was to be converted
to an integer, but the value was too large to overflow. In addition, a British
helicopter crashed in 1994, killing 29 people. The investigation revealed
that the helicopter's software system was "full of flaws." In the movie
2001: A Space Odyssey, the supercomputer HAL kills almost all of the
astronauts because two goals in its program conflict.
In English, bug means defect. Engineers have long used the term bug to
refer to mechanical defects. And there’s a little story about using the word
bug in software development. A moth once flew into an early computer and
caused a computer error. Since then, bugs have been used to refer to bugs.
The moth was later posted in a journal and is still on display at the National
Museum of American History.
Code:
for result in range(5)
    print(result)
# Python will not run this program; it alerts you to the syntax error:

Output is:
SyntaxError: invalid syntax

There are no syntax errors in the following program, but when Python is
run, you will find that the subscript of the reference is outside the scope of
the list element.
result= [12, 24, 36]
print(result[4])
# The program aborts the error reporting

Output:
IndexError: list index out of range

The type of error above, which is only discovered when the program runs,
is called a runtime error. Because Python is a dynamic language, many
checks must happen at run time, such as determining the type of a variable.
As a result, Python is more prone to runtime errors than a static language.
There is also a type of error called a semantic error. The interpreter thinks
your program is fine and it runs normally, but when you examine the
results, you find it is not doing what you wanted. In general, such errors are
the most insidious and the hardest to fix. For example, here's a program
that is meant to print the first element of a list.
mix = ["first", "second", "third"]
print(mix[1])

There is no error in the program, and it prints normally. But what you find
is that it prints the second element, "second", instead of the first. This is
because Python lists are indexed from 0, so to refer to the first element, the
subscript should be 0, not 1.

Debugging
The process of fixing bugs in a program is called debugging. Computer
programs are deterministic, so there is always a source for an error. Of
course, spending a long time unable to debug a program can create a strong
sense of frustration, even the feeling that you are not cut out for program
development; some people slam the keyboard and decide the computer is
toying with them. From my personal observation, even the best
programmers produce bugs when they write programs. Good programmers
are simply more at peace with debugging and do not doubt themselves over
a bug. They may even treat the debugging process as a kind of training that
deepens their computer knowledge by tracing the root cause of the error.
Actually, debugging is a bit like being a detective. Collect the evidence,
eliminate the suspects, and leave the real killer behind. There are many
ways to collect evidence, and many tools are available. For starters, you
don’t need to spend much time with these tools. By inserting a simple
print() function inside the program, you can see the state of the variable and
how far it has run. Sometimes, you can test your hypothesis by replacing
one instruction with another and seeing how the program results change.
When all other possibilities are ruled out, what remains is the true cause of
the error.
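A tiny sketch of this print-based approach, with an assumed average() function:
def average(values):
    total = sum(values)
    print("DEBUG: total =", total, "count =", len(values))   # temporary check
    return total / len(values)

print(average([3, 4, 5]))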
On the other hand, debugging is a natural part of writing programs. One
way to develop a program is Test-Driven Development (TDD). Since Python
is such a convenient, dynamic language, a good approach is to start by
writing a small program that performs one specific function. Then, building
on that small program, you gradually modify it so that it keeps evolving
until it finally meets the complex requirements. Throughout the process,
you keep adding features and you keep fixing mistakes; the important thing
is that you keep coding. Python's author himself loves this style of
programming. So debugging is actually a necessary step on the way to
writing the program you want.

Exception Handling in Detail


For errors that may occur at run time, we can deal with them in the program
in advance. This has two possible purposes: one is to allow the program to
perform more operations before aborting, such as providing more
information about the error. The other is to keep the program running after
it makes a mistake.
Exception handling can also improve program fault tolerance. The
following procedure uses the exception handling:
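A minimal sketch matching the description that follows, using input(), float(), and a division:
try:
    text = input("Enter a number: ")
    number = float(text)
    print(10 / number)
except ValueError:
    print("That was not a number.")
except ZeroDivisionError:
    print("You cannot divide by zero.")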
The code that needs exception handling is wrapped in a try structure, and
each except clause explains how the program should respond when a
particular error occurs. In the program, input() is a built-in function that
receives command-line input, and float() converts other types of data to
floating-point numbers. If you enter a string such as "P", it cannot be
converted to a floating-point number, which triggers a ValueError, and the
corresponding except branch runs. If you enter 0, the division by 0 triggers
a ZeroDivisionError. Both errors are handled by their except branches, so
the program does not abort.
The complete syntax for exception handling is:
try:
    ...  # code should be written here
except exception1:
    ...  # code should be written here
except exception2:
    ...  # code should be written here
else:
    ...  # code should be written here
finally:
    ...  # code should be written here
Chapter 13: Python Web Programming
This chapter briefly explains web programming with Python. The Internet is
now a basic part of life for hundreds of millions of people, and looking at
how Python handles the web will help you learn the subject better. However,
we will only go through the basics here; for advanced web programming,
please follow books dedicated to the topic.

HTTP Communication Protocol


Communication is a wonderful thing. It allows information to be passed
between individuals. Animals send out chemical signals and mating calls.
People whisper sweet words to express love to their partners. Hunters
whistle to quietly round up their prey. The waiter shouts to the kitchen for
two orders of fried chicken and beer. Traffic lights direct traffic, television
commercials broadcast, and the Pharaoh's pyramids carry a curse forbidding
entry. Through communication, everyone is connected to the world around
them. In the seemingly mysterious process of communication, the parties
involved always follow a specific protocol. In daily conversation, we use a
set grammar; if two people use different grammars, they are communicating
with different protocols, and in the end they will not understand each other.
Communication between computers is the transfer of information between
different machines, so computer communication must also follow agreed
communication protocols. To achieve multi-level, global Internet
communication, computer communication uses a multi-level system of
protocols. The HTTP protocol is the most common type of network
protocol; its full name is the Hypertext Transfer Protocol.
The HTTP protocol enables the transfer of files, especially hypertext. In the
Internet age, it is the most widely used Internet protocol. In fact, when we
visit a web site, we usually type an HTTP URL into the browser, such as
http://www.google.com; the http prefix says that the HTTP protocol will be
used to access the site.
HTTP works as a fast food order:
1. REQUEST: A customer makes a request to the waiter for a chicken
burger.
2. Response: The server responds to the customer's request according to the
situation.
Depending on the situation, the waiter may respond in a number of ways,
such as:
The waiter prepares the drumstick burger and hands it to
the customer. (everything is OK)
The waiter finds that he is working at the dessert stand and sends
the customer to the correct counter to order. (redirect)
The waiter tells the customer that the drumstick hamburger is
sold out. (cannot be found)

When the transaction is over, the waiter puts the transaction behind him and
prepares to serve the next customer.
GET /start.html HTTP/1.0
Host: www.mywebsite.com

In the start line, there are three pieces of information:

The GET method. It describes the operation that you want the server
to perform.
/start.html. The path of the resource. This points to the start.html
file on the server.
HTTP/1.0. The protocol version. The first widely used version of HTTP
was 1.0, and 1.1 later became the version in general use.

The early HTTP protocol had only the GET method. Following the HTTP
protocol, the server receives the GET request and passes the specified
resource back to the client; this is similar to the customer ordering and
receiving the burger. Besides GET, the most common method is POST. It is
used to submit data from the client to the server, with the data to be
submitted attached to the request; the server then does some processing of
the data submitted by POST. The sample request also has a header line. The
type of this header is Host, and it indicates the address of the server you
want to reach.
After receiving the request, the server will generate a response to the
request, such as:
HTTP/1.1 200 OK
Content-type: text/plain
Content-length: 12
Jesus Christ

The first line of the reply contains three messages:


HTTP/1.1: the protocol version
200: the status code
OK: the status description

OK is a textual description of the status code 200, present only for human
readability; the computer cares only about the three-digit status code,
which here is 200. 200 means everything is fine and the resource is returned
normally. The status code summarizes how the server responded to the
request. There are many other common status codes, such as:
302, Redirect: I don't have the resource you're looking for here,
but I know another place, xxx, that does. You can find it there.

404, Not Found: I can’t find the resources you’re looking for.

The next line, Content-type, indicates the type of resource that the body
contains. Depending on the type, the client can start different handlers (such
as displaying image files, playing sound files, and so on). Content-length
indicates the length of the body part, in bytes. The rest is the body of the
reply, which contains the main text data.
Through an HTTP transaction, the client gets the requested resource from
the server, which is the text here. The above is a brief overview of how the
HTTP protocol works, omitting many details. From there, we can see how
Python communicates with HTTP.

http.client Package
The http.client package can be used to make HTTP requests. As we saw in
the previous section, the most important pieces of information in an HTTP
request are the host address, the request method, and the resource path.
Once this information is clear, the http.client package lets you make the
HTTP request.
Here is the code below in Python:
import http.client

connection = http.client.HTTPConnection("www.facebook.com")  # host address
connection.request("POST", "/")                              # request method and resource path
response = connection.getresponse()                          # gets a response
print(response.status, response.reason)                      # status code and description
content = response.read()                                    # body of the reply
Conclusion
Thank you for making it through to the end of Learn Python Programming!
Let's hope it was informative and able to provide you with all of the tools
you need to achieve your goals, whatever they may be.
The next step is to practice Python in detail. Remember that programming
is not an easy job: you need to master the basics and use them to build a
well-structured, organized house of knowledge. You may often get irritated
by errors; always motivate yourself to keep working. Try to use internet
resources like Stack Overflow to increase your productivity whenever you
are stuck on a piece of code.
Python is a great programming language for beginners thanks to its
extensive resources and the beginner-friendly projects on GitHub. Try to
contribute back to the Python community with all of your strength. Now, go
program!
SQL

A Practical Introduction Guide to Learn SQL


Programming Language. Learn Coding
Faster with Hands-On Project. Crash Course
Guide for your Computer Programming

Introduction
Chapter 1: What is SQL?
What are Databases?
What is a DBMS (Database Management System)?
Types of DBMS
Advantages of Databases
Getting ready to code
Installing MySQL applications
Type of SQL server software versions:
What are the components that are in SQL software?
What to consider before installing the software?
Starting and Stopping Oracle database instances
Writing the First MySQL code
Administrate the database
Object explorer
Creating databases
Modify tables
Manage tables
Delete tables
Schema creation
Creating Data
Inserting data
Chapter 2: SQL Server and Database Data Types
Chapter 3: Creating Your First Database And Table
Step 1: SQL Server Management Studio Software Installation
Step 2: Launch the SQL Studio
Step 3: Identify Database Folder
Step 4: Create a New Database
Step 6: Developing the Primary Key
Step 7: Structure the Tables
Step 8: Creating Other Columns
Step 9: Saving the Table
Step 10: Add Data
Step 11: Running the Table
Step 12: Data Querying
Chapter 4: Creating Your First Database and Table Using Command
Line
Chapter 5: SQL Views and Transactions
Chapter 6: A Look at Queries
Chapter 7: SQL Tools and Strategies
Working with the Queries
Working with the SELECT Command
A Look at Case Sensitivity
Chapter 8: Exercises, Projects And Applications
Examples of Exercises in SQL
Projects in SQL Programming
Applications of SQL
Data Integration
Analytical Queries
Data Retrieval
Chapter 9: Common Rookie Mistakes
DATEADD ( datepart, number, date)
DATEDIFF Function
Datename Function
Chapter 10: Tables
Create tables
Deleting Tables
Inserting Data into a Table
Dropping a Table
Using the ALTER TABLE Query
Chapter 11: The Database
Chapter 12: Tips and tricks of SQL
Four Tips That Make Using SQL Easier!
Chapter 13: Database Components
Database Tables
Rows and NULL values
Primary Keys
Foreign Keys
Stored Procedures
Chapter 14: Working With Subqueries
The SQL Server Subquery
Creating New Databases in SQL Programming
Industries That Use SQL Programming
Common SQL Database Systems
The Relevance of the Database System to Social Networks
Conclusion
Introduction
SQL is a programming language that stands for ‘Structured Query
Language,’ and it is a simple language to learn considering it will allow
interaction to occur between the different databases that are in the same
system. This kind of database language first came out in the 1970s, when
IBM developed its prototype of the language, and it really started to grow
in popularity as the business world began to take notice.
The version of the language originally developed at IBM was known as
SEQUEL and was built for IBM's System R research database. SQL proved
so successful that other vendors adopted it; Oracle released the first
commercially available SQL-based database and, thanks to how well it
works with SQL, is still one of the leaders in database software. SQL itself
keeps changing so that it can keep up with everything that is needed in the
programming and database management world.
SQL is a set of instructions that you use to interact with a relational
database. While there are many languages you could use for this, SQL is the
one language that nearly all relational databases understand. Whenever you
interact with one of these databases, the software translates the commands
that you give, whether through form entries or mouse clicks, into SQL
statements that the database can interpret.
If you have ever worked with a software program that is database driven,
then it is likely that you have used some form of SQL in the past, probably
without even knowing it. For example, many dynamic web pages are
database driven. They take the user input from the forms and clicks that you
make and use this information to compose an SQL query. The query then
retrieves information from the database to perform the action, such as
switching over to a new page.
To illustrate how this works, think about a simple online catalog that allows
you to search. The search page will often contain a form that will just have
a text box. You can enter the name of the item that you would like to search
using the form and then you would simply need to click on the search
button. As soon as you click on the search button, the web server will go
through and search through the database to find anything related to that
search term. It will bring those back to create a new web page that will go
along with your specific request.
For those who have not spent much time learning a programming language,
and who would not consider themselves programmers, the commands used
in SQL are not too hard to learn. SQL commands are all designed with a
syntax that resembles the English language. At first this may seem
complicated, and you may worry about how much work it will take to get
set up. But once you start to work on a few queries, you will find that it is
not actually that hard, and often just reading an SQL statement out loud
will tell you what the command does.
How this works with your database
If you decide that SQL is the language you will use to manage your
database, take a look at what the database really is: basically just organized
groups of information. Some people see a database as an organizational
mechanism that stores information so that you, the user, can look it up
later on as effectively as possible. There are a ton of things that SQL can
help you with when it comes to managing your database, and you will see
some great results.
Chapter 1: What is SQL?
Databases are an important part of computer technology and have grown
enormously over the past few decades. Every internet or web application
involves data that needs to be collected and shown. Many technologies
nowadays need to store bundles of information and process them, according
to user needs, in very little time (microseconds).
What are Databases?
If you want a layman's definition of a database, here it is:
"A database is something that stores information, commonly known as data.
Data may be of any kind: images, text, and video, to name a few."
A practical example of a database:
Consider a library. Libraries keep books, and people come to loan them. In
the old days, libraries kept bundles of paper registers to track the books
loaned out to members. After the advent of computers, things became much
easier, with libraries getting equipped with library management systems.
This software stores every detail of the books, of the library members, and
of the books that are on hold. All of this information is stored in databases
in the cloud, and library members can easily access their library account
online with encrypted safety.
Small Exercise:
I hope you got a good overview of databases. You will notice how important
they are in today's technological world.
What is a DBMS (Database Management System)?
The data we keep in databases is often sensitive, so there must be a system
that lets us manipulate it easily, logically, and under control. For this
reason, computer scientists designed database management systems to
create, manipulate, and erase data. With a DBMS we gain complete
authority over the data we possess and can use it logically.
Types of DBMS
Various types of DBMS technologies have flourished in the computing
world over the past decades, such as hierarchical and network DBMSs.
However, they did not prove feasible for many reasons. The most successful
kind of database management system is the relational DBMS, which
represents data using tables. Some advanced applications use object-
oriented DBMSs to process their information. This book, however, deals
with relational DBMSs.
Advantages of Databases
As said before, databases are everywhere in the real world nowadays and
have increased their capabilities tremendously. Even smartphones have
built-in databases to store information for their applications. Travel
operators such as railways and airlines depend extensively on databases to
obtain real-time information. Databases can also be hosted on the internet
in the cloud, which can decrease costs for small companies. Multinational
companies like Google and Amazon collect tons of information and store it
in remote databases to test and develop machine learning algorithms.
Just like programming languages, databases need a language to effectively
query and update the information they hold. For this exact purpose,
database query languages were developed.
What are Database Query Languages?
Data that cannot be queried and manipulated is of no use; this is exactly
why database query languages exist. They rest on two principal concepts,
the Data Definition Language (DDL) and the Data Manipulation Language
(DML), used to define structures and to query and update information. The
most popular and preferred database query language is SQL, which is what
this book deals with.
Getting ready to code
Just like any other programming language, you need to understand the
essence of the language before diving into it. Make a clear note of the
advantages of SQL and write down common errors that you may encounter.
Make a list of the applications you need to install, and check the system
requirements for all of that software.
Installing MySQL applications
Microsoft SQL Server is one of the database tools that Microsoft offers to
countless organizations and small businesses, and several (often costly)
certification courses are available for a better understanding of how to
manage these databases. In this book we explain the installation procedure
after describing the types of SQL Server editions that are available.
Type of SQL server software versions:
Microsoft offers a range of editions for better deployment of its services. It
is wasteful to buy huge resources if you do not have much data, and it also
doesn't make sense to rely on scant resources for huge data, as the
continuous downtime may irritate your service users. For this exact reason,
SQL Server is available in the different editions described below.
a) Express
This is a small, free edition that consists of the basic resources and
workbench tools that are needed to get an individual or a small enterprise
up and running.
b) Standard
The Standard edition is usually recommended for small businesses that have
moderate amounts of data to handle. It can quickly monitor operations and
processes data faster when compared with the Express edition.
c) Enterprise
The Enterprise edition is one of the most used SQL Server editions all around
the world. It has a wide range of capabilities and can be used by both medium
and large companies. It consists of a lot of services such as reporting,
analysis, and auditing. The Enterprise edition costs a lot more than the other
editions due to its complex nature.
d) Developer
This edition of SQL Server is especially used by developers to test and
manipulate functionalities in a development environment. Developers use this
edition to experiment with the logical entity they are dealing with. You can
easily convert the license of your Developer edition to an Enterprise edition
with a click.
e) Web
This edition of SQL Server is primarily developed to deal with web
applications. A lot of mid-sized hosting services use this edition to store
small amounts of data.
What are the components that are in SQL software?
SQL Server software consists of various components that need to be
installed. Here are some of the most used SQL server managers.
a) Server management system
This is the most important component where every server that is present is
registered, created, managed or erased if needed. This is a working interface
for database administrators to interact with the databases that are operating.
b) Server configuration manager
This component helps us to give customized information about network
protocols and parameters in detail.
c) SQL Server Profiler
This is the component that helps you monitor all the activities that are
running through the system.
What to consider before installing the software?
1. Do thorough research about the software you are installing. You can
further refer to the documentation or online resources to check minimum
system requirements and components information.
2. Learn about components. Get a good understanding of the different
components, such as analysis and reporting, that the SQL Enterprise edition
offers. Nowadays many industries require database administrators to analyze
and report their findings.
3. Learn about the authentication modes that are available and do good
research about them.
Starting and Stopping Oracle database instances
The process of starting an Oracle database instance is divided into three
steps: starting the instance, loading the database, and opening the database.
Users can start the database in different modes according to actual needs.
An Oracle database instance must read an initialization parameter file at
startup to obtain parameter configuration information about the instance
startup. If the pfile parameter is not specified in the startup statement,
Oracle first reads the server initialization parameter file spfile at the default
location. If no default server initialization parameter file is found, the text
initialization parameter file at the default location will be read.
The following will explain several STARTUP modes listed in startup syntax
respectively.
1. NOMOUNT mode
This mode only starts the instance, without loading or opening the database.
The code and running results are as follows:
SQL> startup nomount
2. MOUNT mode
This mode starts the instance, loads the database, and keeps the database
closed.
When starting the database instance to MOUNT mode, the code and
running results are as follows.
SQL> startup mount
3. OPEN mode
This mode starts the instance, loads and opens the database, which is the
normal startup mode. Users who want to perform various operations on the
database must start the database instance using OPEN mode.
Start the database instance to OPEN mode, and the code and running results
are as follows.
SQL> startup
Like starting a database instance, shutting down the database instance is
also divided into three steps, namely closing the database,
dismounting the database, and shutting down the Oracle instance.
The SQL statement for the shutdown is here:
SHUTDOWN [Enter the parameter here]
1. NORMAL approach
This method is called a normal shutdown. If there is no limit on the time
available to shut down the database, it is usually the method used.
The code and running results are as follows:
SQL> shutdown normal;
The database shuts down after all connected users have disconnected.
2. TRANSACTIONAL approach
This method is called transaction closing. Its primary task is to ensure that
all current active transactions can be committed and shut down the database
in the shortest possible time.
3. ABORT mode
This method is called the termination closing method, which is mandatory
and destructive. Using this method will force any database operation to be
interrupted, which may lose some data and affect the integrity
of the database. Unless the database cannot be shut down using the other
methods, this method should be avoided as much as possible.
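Putting these pieces together, here is a minimal sketch of a typical session;
the exact prompts and messages will vary with your Oracle version:

SQL> startup nomount           -- start the instance only
SQL> alter database mount;     -- load (mount) the database
SQL> alter database open;      -- open it for normal use
SQL> shutdown transactional;   -- later, close once active transactions finish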
Launching MySQL workbench
MySQL Workbench is a visual database design tool released by
MySQL AB, whose predecessor was DBDesigner from FabForce. MySQL
Workbench is a unified visualization tool designed for developers, DBAs,
and database architects. It provides advanced data modeling, a flexible SQL
editor, and comprehensive management tools. It can be used on Windows,
Linux, and Mac.
To use it on Windows, click on the SQL server studio and select the
Workbench option to open the interface. If you are using Linux or Mac, you
need to enter the command 'workbench' after entering the SQL instance.
Writing the First MySQL code
First of all, when you are trying to write your first MySQL code, it is
important to get a good text editor for writing down the queries. SQL
software packages usually provide query-writing tools for their users. It is
important to learn about the syntax in detail. The first code you write is
always clumsy and can easily make you lose patience. However, remember that
there is a lot to learn before actually making things better with your code.
Definition of Database
This chapter is a clear introduction to using the SQL management
server and performing different operations on the system. We will start with
creating tables and go all the way to creating triggers using the
SQL Server Management Studio. This chapter is pretty straightforward and
will introduce you to the interfaces that the SQL software offers. As we move
on further, we will discuss various advanced SQL query statements using
data definition and data manipulation languages. For now, let us learn
about all the capabilities of the SQL server management system.
The server management system lists all the servers registered on the
instance, and we need to click on the server we wish to work with.
When we click on the server it will ask for authorization details.
Authentication is important for the security of databases, so unless you give
the correct credentials you will not be able to connect to the server and all
the databases it possesses.
Administering the database
After connecting to the database, you have full control over the data that is
present. You can look at all the objects for a particular database using the
Object Explorer. If your server is not registered, you need to register it in
the management studio before working with it for the first time.
Administration capabilities will also give you the power to change the
authentication details of the database.
Object explorer
Object Explorer is a tree view of all the instances and entities that are
present. They are placed hierarchically and it is easy to operate the
databases in that way. You can even connect to multiple databases with the
help of an object explorer that is present. Object Explorer is one of the
friendliest user interfaces in the SQL server studio.
Create new server groups
Usually, SQL Server lets users create groups of the databases present. You
can also grant permissions to particular users to access a database. In
advanced application systems, this is replaced with encrypted key
authentication.
Start and stop servers
You can easily start or stop the databases present using the options available
for the instances. All you need to do is press the stop button to stop the
database instance.
Creating databases
First of all, go to the Object Explorer and select the server you wish to use.
After getting authenticated, right-click in the left pane and it will give a
list of options. In those options select 'create a new database' to create a
database instance. You can even select the object properties and change some
of the information manually. For example, you can input data type information.
You can even provide the number of rows and columns for the databases.
After clicking the create button on the interface the changes will take place
immediately.
Modify tables
You can easily change the number of columns in a table using the SQL
Server Management Studio. The object pane offers an option called
Modify. You can change all the properties of the database and the changes
take effect immediately, within a split second.
With this modification option, you can change the primary key and foreign
key values so that you can run complex queries.
Manage tables
Management is altogether a different option in the SQL server software.
When you click on the manage instance all the properties will be displayed.
You can look at all the data values and types available and can change them.
For example, you can change the length of the columns and change the data
type of a column from decimal to integer. Everything happens in a single
click.
You can even see the ER diagram of the databases to understand the
relationship between the entities and instances.
Delete tables
Deleting also works in the same way as managing and creating. First of all,
select the table you want to delete. Right-click on the table
and click on the delete instance option. After entering the option, you will
get a prompt asking whether to delete or not. If you click yes, all the data in
that table will be deleted in an instant. It is not possible to restore the
deleted columns from the SQL server management software. However, you can
restore deleted tables using the recovery models available in SQL
Server.

Schema creation
A schema is a logical container for the objects that a database possesses.
People usually get confused by the term schema, but it is simply a way of
organizing a database. It is always recommended to create a schema because
you will be well aware of the objects in the instance. Schema creation is a
good practice for database administrators.
To create a schema all you need to do is enter the database objects and click
on the option that says create a schema. When you click this option all of
the properties of the column will be given and you need to manually enter
them. This is how a schema is created and there are even options in the
interface to delete that schema.
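If you prefer doing this with a query instead of the interface, here is a
minimal sketch; the schema name sales and the table inside it are hypothetical,
and GO is the batch separator used by the management studio:

-- create a schema to group related objects
CREATE SCHEMA sales;
GO

-- create a table that belongs to that schema
CREATE TABLE sales.orders (
    OrderID    INT PRIMARY KEY,
    OrderTotal DECIMAL(10, 2)
);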

Creating Data
Entering data into a database or table is one of the most important
skills all database administrators should have. To create data
using the SQL server, you need to start by understanding the essence of the
system.
All you need to do is enter the option to create data and fill it with column
values of your choice. You can even create data belonging to different data
types using this functionality.

Inserting data
Data should be inserted before you can go on with the queries. It is usually
not practical to insert huge amounts of data by hand through the SQL server
software because of the complexities that may arise. However, if you need to
insert a lot of data, here is an easy way.
First of all, create an Excel document and fill in the values in the columns.
Use this file and click on the insert data option present alongside the
database option. After that, all of the values will be inserted and can be
easily updated with the other available options.
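If you would rather script the import, a rough equivalent is the BULK INSERT
statement; this sketch assumes you have exported the spreadsheet to a CSV file
at a hypothetical path and that a matching customers table already exists:

-- load a CSV export into an existing table
BULK INSERT customers
FROM 'C:\data\customers.csv'
WITH (
    FIELDTERMINATOR = ',',   -- column separator in the file
    ROWTERMINATOR = '\n',    -- line separator in the file
    FIRSTROW = 2             -- skip the header row
);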
This chapter is just a small introduction to the SQL server software. We
have briefly introduced some of the concepts that databases usually deal
with.
Chapter 2 SQL Server and Database Data
Types
To be able to hold data in certain columns, SQL Server and other relational
database management systems utilize what are called “data types.”
There are different data types available, depending on what data you plan to
store.
For instance, you may be storing currency values, a product number and a
product description. There are certain data types that should be used to store
that information.
The majority of the data types between each RDBMS are relatively the
same, though their names differ slightly, like between SQL Server and
MySQL. There are a lot of data types, though some are more frequently
used than others. The following is a list of common ones that you may find
or work with.
The AdventureWorks2012 database will be used as an example.
VARCHAR
This is an alphanumeric data type, great for holding strings like first and
last names, as well as an email address for example. You can specify the
length of your varchar data type like so when creating a table,
VARCHAR(n). The value of ‘n’ can be anywhere from 1 to 8,000 or you
can substitute MAX, which is 2 to the 31st power, minus 1. However, this
length is rarely used.
When designing your tables, estimate the length of the longest string plus a
few bytes to be on the safe side. If you know that the strings you will be
storing will be around 30 characters, you may want to specify
VARCHAR(40) to be on the safe side.
This data type is flexible in the sense that it stores only the characters
entered into it, even if you don't insert all 40 characters as in the example
above.
However, there is a bit of storage overhead, as it adds 2 bytes to
every value. For instance, if your string is 10 bytes/characters in
length, it will actually occupy 12.
NVARCHAR
Much like the varchar data type, this is alphanumeric as well. However, it
also stores international characters. So this is a good option if you end up
using characters and letters from another country’s language.
The other difference between VARCHAR and NVARCHAR is that
NVARCHAR's maximum specified length goes up to 4,000 instead of 8,000 like
VARCHAR, though they are defined the same way:
NVARCHAR(n), where 'n' is the length in characters.
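As a quick illustration, here is a minimal sketch of a table definition using
both string types; the table and column names are hypothetical:

CREATE TABLE Person (
    FirstName VARCHAR(40),      -- plain alphanumeric text
    LastName  VARCHAR(40),
    Email     VARCHAR(100),
    LocalName NVARCHAR(40)      -- can hold international characters
);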
EXACT NUMERICS
There are various number data types that can be used to represent numbers
in the database. These are called exact numbers.
These types are commonly used when creating IDs in the database, like an
Employee ID for instance.
Bigint – Values range from -9,223,372,036,854,775,808 to
9,223,372,036,854,775,807, which isn’t used so frequently.
Int – most commonly used data type and its values range from
-2,147,483,648 to 2,147,483,647
Smallint – Values range from -32,768 to 32,767
Tinyint – Values range from 0 to 255
In any case, it’s best to pick the data type that will be the smallest out of all
of them so that you can save space in your database.
DECIMAL
Much like the exact numeric data types, this holds numbers; however, they
are numbers including decimals. This is a great option when dealing with
certain numbers, like weight or money. Decimal values can only hold up to
38 digits, including the decimal points.
Let’s say that you wanted to enter $1,000.50 into your database. First, you
would change this value to 1000.50 and not try to add it with the dollar sign
and comma. The proper way to define this value per the data type would be
DECIMAL(6,2).
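Here is a minimal sketch of how these numeric types might be combined in a
table definition; the Product table and its columns are hypothetical:

CREATE TABLE Product (
    ProductID  INT,             -- whole-number identifier
    StockLevel SMALLINT,        -- small whole numbers
    Rating     TINYINT,         -- 0 to 255 is plenty for a rating
    Price      DECIMAL(6, 2)    -- up to 9999.99 with two decimal places
);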
FLOAT
However, this is an Approximate Numeric, meaning it should not
be used for values that you expect to be exact. For example, float values are
commonly used in scientific equations and applications.
The data type can be defined as FLOAT(n), where 'n' controls the precision
and can range from 1 to 53. It uses scientific notation and its range is
from -1.79E + 308 to 1.79E + 308. The "E" represents an exponential value.
In this case, its lowest value is -1.79 times 10 to the 308th power. Its max
value is 1.79 times 10 to the 308th power (notice how this is in the positive
range now).
To specify a float data type when creating a table, you’d simply specify the
name of your column and then use FLOAT. There is no need to specify a
length with this data type, as it’s already handled by the database engine
itself.
DATE
The DATE data type in SQL Server is used quite often for storing dates of
course. Its format is YYYY-MM-DD. This data type will only show the
month, day and year and is useful if you only need to see that type of
information aside from the time.
The values of the date data type range from ‘0001-01-01’ to ‘9999-12-31’.
So, you have a lot of date ranges to be able to work with!
When creating a table with a date data type, there’s no need to specify any
parameters. Simply inputting DATE will do.
DATETIME
This is similar to the DATE data type, but more in-depth, as this includes
time. The time is denoted in seconds; more specifically it is accurate by
0.00333 seconds.
Its format is as follows: YYYY-MM-DD HH:MI:SS. The values of this data
type range between '1753-01-01 00:00:00' and '9999-12-31 23:59:59'.
Just as the DATE data type, there is no value or length specification needed
for this when creating a table. Simply adding DATETIME will suffice.
If you’re building a table and are deciding between these two data types,
there isn’t much overhead between either. Though, you should determine
whether or not you need the times or would like the times in there. If so,
then use the DATETIME data type, and if not, use the DATE data type.
BIT
This is an integer value that can either be 0, 1 or NULL. It’s a relatively
small data type in which it doesn’t take up much space (8 bit columns = 1
byte in the database). The integer value of 1 equates to TRUE and 0 equates
to FALSE, which is a great option if you only have true/false values in a
column.
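To see these last few types side by side, here is a minimal sketch of a
hypothetical table definition:

CREATE TABLE EventLog (
    EventDate DATE,        -- date only, e.g. 2020-05-01
    CreatedAt DATETIME,    -- date plus time of day
    IsActive  BIT          -- 1 means true, 0 means false
);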
Chapter 3 Creating Your First Database And
Table

Before you can have a working database with practical tables in SQL
programming, you need to create a database and then a table.
There are several SQL database applications out there, but almost all of them
follow similar steps for creating a new database and tables. When you
create your first database, you will then design a table where
you will feed your data and store it more securely and effectively. SQL Server
offers a free graphical user interface, and it is easy to create with. The
following is a step-by-step guide on how to create your first SQL database and
tables before thinking of feeding in your data.
Steps
Step 1: SQL Server Management Studio Software Installation
The first step in creating your first database and table is acquiring the
SQL software, available for free online from Microsoft. This software
comes fully packed, allowing you to interact with and manage the SQL server
with limited command-line instructions. Besides, it is crucial when it comes
to working with databases remotely. Mac users can, however, utilize
open-source programs, for instance SQuirreL SQL, to maneuver through the
database system.
Step 2: Launch the SQL Studio
When you launch the SQL studio, the software first asks for the
server that you would prefer to use or the one you are using presently.
If you have an existing one, you may choose to input the
permissions, authenticate, and connect. Some may prefer a local database
system, setting a new name and authenticating with a preferred name or
address. Launching the SQL Server Management Studio begins the process
of interacting with the software and is the first step on the path to creating
your first database and table.
Step 3: Identify Database Folder
Immediately after the connection is made, whether local or remote, a
window will open on the left of the screen. At the top there will be the server
it connected to. If not, you may click on the '+' icon, which will
display multiple elements, including the option to create a new database. In
some versions, you may see the icon for creating a new database
immediately in the left drop-down window. You can then click on 'Create
New Database.'
Step 4: Create a New Database
As mentioned in step 3, the drop-down menu will display multiple
options, including the one to create a new database. First, you will
configure the database according to your parameters as well as provide
a name for ease of identification. Most users prefer leaving the settings at
their defaults, but you can change them if you are familiar with how they
impact the process of data storage in the system. Note that when you
create the database, two files will be generated automatically: the data file
and the log file. Data files are responsible for the storage of data, while log
files track all the changes, modifications, and other alterations made to the
database.
Step 5: Create Your Tables
Databases do not store data unless structures in the form of rows and
tables are created for that data to remain organized. Tables are the primary
storage units where data is held, but initially you have to create the table
before you insert the information. Similar to creating a new database, tables
are also straightforward to create. In the Databases folder, expand the
window, then right-click on Tables and choose 'New Table.' A window will
open, displaying a table that can be easily adjusted for the number of
rows and columns, the titles, and how you want to organize your work. In this
step, you will succeed in creating both the database and the table, therefore
moving forward in organizing your task.
Step 6: Developing the Primary Key
The primary key plays a significant role in databases as it acts as a record
number or ID for easy identification and reference when you view the
data later. As such, it is highly recommended to create this key in the first
column. One way to do this is to enter an ID column by typing int as its data
type and deselecting 'Allow Nulls.' Then select the key
icon found in the toolbar and mark it as the primary key.
Step 7: Structure the Tables
Tables typically have multiple columns, also referred to as fields, and each
column represents one element of a data entry. When creating your table,
structure it to fit the data entries you expect, so that each column holds one
piece of every record alongside the primary key. Thus, the structuring
process will entail identifying each column with a given set of data, for
example a FirstName column, a LastName column, and an Address column,
among others.
Step 8: Creating Other Columns
As soon as you create the primary key column, you will notice that more
columns appear below it. These are not for primary keys but are
essential for the insertion of other information. As such, ensure you choose
the correct data type for each column to avoid filling the table with the wrong
information. In a column, you can enter 'nchar,' which is a data type
for text, 'int,' used for whole numbers, or 'decimal,' for the storage of
decimal numbers.
Step 9: Saving the Table
After you finish creating the content in each field, you will notice that your
table consists of rows and columns. However, you will need to first save
the table before entering the information. This can be done by selecting the
Save icon in the toolbar and naming your table. When naming your table,
ensure that you create a name that you can easily relate to the content or
recognize. Besides, tables within the same database should have different
names so that they can be identified easily.
Step 10: Add Data
Once the table is saved, you can now add the data into the system, feeding
each field with relevant information. You can confirm that the table
is saved by expanding the Tables folder and checking whether your table name
is listed. If not, use the Tables folder to refresh the tables, and you will see
your table. Back in the table, right-click on it; a drop-down dialog
box will appear, and you can select 'Edit Top 200 Rows'. The window will then
display fields for you to add data, but ignore the primary keys as they will
fill in automatically. Continue with the same process until you enter the
last record in the table.
Step 11: Running the Table
After you have finished working on the table, you should save the content so
that you do not lose your work. As the table is already saved, click on
'Execute SQL' on the toolbar when you have finished entering data, and it
will execute the process of feeding each value you entered into the columns.
The parsing process may take a few seconds, depending on the load of data. If
there are any errors in the feeding process, the system will show you where
you input data incorrectly. You can also execute the parsing of
all the data by using the combination of 'Ctrl' and 'R.'
Step 12: Data Querying
At this step, you have created your first database and table and successfully
saved the information through SQL programming. The database is
now fully functional, and you can henceforth create more tables within a
single database. There is a limit on how many tables a database can hold, but
most users never need to worry about this rule. You can, therefore, create as
many new databases as you want and create more tables. In the end, you can
query your data for reports or any relevant purposes, such as organizational
or administrative purposes. Often, having a general idea of SQL
programming, especially by putting it into practice creating databases
and tables, allows you to advance your learning skills.
Chapter 4 Creating Your First Database and
Table Using Command Line
You can also use SQL commands and statements to create databases and
tables. The same SQL Server Management Studio as in the above
guide applies, but commands and statements are used to give instructions to
the system to perform a given function. To build your first database, you use
the command 'CREATE DATABASE database_name' and hit the
execute button to run it. The message on the screen should
be 'Command(s) completed successfully,' showing that your
database has been created.
To use the database, run the command 'USE database_name,' which
tells the query window to run queries against the new database. Creating a
new table entails running the command 'CREATE TABLE
table_name (...)'. Entering data follows the command 'INSERT INTO
table_name VALUES (...)', repeating the same process for
all the datasets you have. You can also view the data you
saved using the command 'SELECT * FROM table_name'.
All the above commands are the critical ones when it comes to
maneuvering through different SQL databases. As such, it is always
essential to learn each basic SQL command so that you can execute programs
readily.
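Put together, a minimal sketch of the whole sequence might look like this; the
database, table, and values are hypothetical, and each block can be run as its
own batch in the query window:

-- create and switch to a new database
CREATE DATABASE shopDB;

USE shopDB;

-- create a table and insert a couple of rows
CREATE TABLE customers (
    CustomerID INT PRIMARY KEY,
    FirstName  VARCHAR(40),
    LastName   VARCHAR(40)
);

INSERT INTO customers (CustomerID, FirstName, LastName)
VALUES (1, 'Ann', 'Lee'), (2, 'Bob', 'Ray');

-- view what was saved
SELECT * FROM customers;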
Chapter 5 SQL Views and Transactions
A "Database View" in SQL can be defined as a “virtual table” or “logical
table” described as "SQL SELECT" statement containing join function. As
a "Database View" is like a table in the database consisting of rows and
columns, you will be able to easily run queries on it. Many DBMSs,
including “MySQL”, enable users to modify
information in the existing tables using "Database View" by meeting certain
prerequisites, as shown in the picture below.
A "SQL database View" can be deemed dynamic since there is
no connection between the "SQL View" to the physical system. The
database system will store "SQL Views" in the form
on "SELECT" statements using "JOIN" clause. When the information in the
table is modified, the "SQL View" will also reflect that modification.

Pros of using SQL View

A "SQL View" enables simplification of complicated


queries: a "SQL View" is characterized by a
SQL statement which is associated with multiple tables. To
conceal the complexity of the underlying tables from the end
users and external apps, "SQL View" is extremely helpful.
You only need to use straightforward "SQL" statements
instead of complicated ones with multiple "JOIN" clauses
using the "SQL View".
A "SQL View" enables restricted access to information
depending on the user requirements. Perhaps you would not
like all users to be able to query a subset of confidential
information. In such cases "SQL View" can be used to
selectively expose non-sensitive information to a targeted set
of users.
The "SQL View" offers an additional layer of safety. Security
is a key component of every "relational database
management system". The "SQL View" ensures “extra
security” for the DBMS. The "SQL View" enables generation
of a “read-only” view to display “read-only” information
for targeted users. In "read-only view", users are able to only
retrieve data and are not allowed to update any information.
The "SQL View" is used to enable computed columns.
The table in the database is not capable of containing
computed columns but a "SQL View" can
easily contain computed column. Assume in the
"OrderDetails" table there is "quantityOrder" column for
the amount of products ordered and "priceEach" column for
the price of each item. But the "orderDetails" table cannot
contain a calculated column storing total sales for every
product from the order. If it could, the database schema
may have never been a good design. In such a situation, to
display the calculated outcome, you could generate a
computed column called "total", which would be a product of
"quantityOrder" and "priceEach" columns. When querying
information from the "SQL View", the calculated column
information will be calculated on the fly.
A "SQL View" allows for backward compatibility. Assume
that we have a central database that is being used by
multiple applications. Out of the blue you have been
tasked to redesign the database accommodating the new
business needs. As you delete some tables and create new
ones, you would not want other applications to be affected
by these modifications. You could generate "SQL Views" in
such situations, using the identical schematic of the “legacy
tables” that you are planning to delete.
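Here is a minimal sketch of the computed-column idea mentioned above; the view
name and the orderNumber column are hypothetical additions, while
"quantityOrder" and "priceEach" are taken from the description:

"CREATE VIEW orderTotals AS
SELECT
    orderNumber,
    quantityOrder * priceEach AS total   -- calculated on the fly
FROM
    orderDetails;"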
Cons of using SQL View
In addition to the pros listed above, the use of "SQL View" may have
certain disadvantages such as:

Performance: Executing queries against a "SQL
View" could be slow, particularly if the view is generated from
another "SQL View".
Table dependencies: A "SQL View" is created from
the underlying tables of the database, so anytime the structure of a table
connected with the "SQL View" is modified, you also
need to modify the "SQL View".
Views in MySQL server
MySQL has supported database views since version 5. In MySQL, nearly
all characteristics of a "View" conform to the
"standard SQL:2003".
MySQL can run queries against views in a couple of ways:

1. MySQL can produce a "temporary table" on the basis of the
"view definition statement" and then execute all following
queries on this "temporary table".
2. MySQL can combine the new query with the query that
defines the "SQL View" into a single comprehensive query
and then execute this merged query.
MySQL offers versioning capability for all "SQL Views". Whenever a
“SQL View” is modified or substituted, a clone of the view is backed up in
the "arc (archive) directory" residing in a particular database folder.
The "backup file" is named "view name.frm-00001". If you modify your
view again, MySQL will generate a new "backup file" called "view
name.frm-00002".
You can also generate a view based on other views through MySQL, by
creating references for other views in the "SELECT" statement defining the
target "SQL View".
“CREATE VIEW” in MySQL
The “CREATE VIEW” query can be utilized to generate new “SQL Views”
in MySQL, as shown in the syntax below:
"CREATE
[ALGORITHM = {MERGE | TEMPTABLE | UNDEFINED}]
VIEW view_name [(column_list)]
AS
select-statement;"
View Name
Views and tables are stored in the same space within the database, so it is
not possible to give the same name to a view and a table. Furthermore, the
name of a view has to be in accordance with the naming conventions of the
table.
Creating a "SQL View" from another "SQL View"
The MySQL server permits the creation of a new view on the basis of another
view. For instance, you could produce a view named "BigSalesOrder" based
on the "SalesPerOrder" view that we created earlier, to show every sales
order for which the total adds up to more than 60,000, using the syntax
below:
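The original example is not reproduced here; a minimal sketch of the idea,
assuming the earlier "SalesPerOrder" view exposes "orderNumber" and "total"
columns, might look like this:

"CREATE VIEW BigSalesOrder AS
SELECT
    orderNumber,
    total
FROM
    SalesPerOrder
WHERE
    total > 60000;"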
Check if an existing view is updatable
By running a query against the "is_updatable" column of the views in the
"information_schema" database, you can verify whether a view in
the database is updatable or not.
You can use the query below to display all the views from the
"classicmodels" database and check which of them are updatable:
"SELECT
table_name,
is_updatable
FROM
information_schema.views
WHERE
table_schema = 'classicmodels';"
The “result set” is displayed in the picture below:
Dropping rows using “SQL View”
To understand this concept, execute the syntax below to first create a table
called "items", use the "INSERT" statement to add records to this table,
and then use the "CREATE" clause to generate a view containing items
with prices higher than "700".
"-- create a new table named items
CREATE TABLE items (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(100) NOT NULL,
price DECIMAL(11 , 2 ) NOT NULL
);
-- insert data into the items table
INSERT INTO items(name,price)
VALUES('Laptop',700.56),('Desktop',699.99),('iPad',700.50) ;
-- create a view based on items table
CREATE VIEW LuxuryItems AS
SELECT
*
FROM
items
WHERE
price > 700;
-- query data from the LuxuryItems view
SELECT
*
FROM
LuxuryItems;"
The “result set” is displayed in the picture below:
Now, use the “DELETE” clause to drop a record with “id value 3”.
"DELETE FROM LuxuryItems
WHERE
id = 3;"
After you run the query above, you will receive a message stating “1 row(s)
affected”.
Now, you can verify the data with the view, using the query below:
"SELECT
*
FROM
LuxuryItems;"
The “result set” is displayed in the picture below:

Finally, use the syntax below to retrieve the data from the underlying
table "items" to confirm that the "DELETE" statement in fact removed the
record:
"SELECT
*
FROM
items;"
The “result set” is displayed in the picture below, which confirms that the
row with “id value 3” was deleted from the “items” table:

Modification of “SQL View”


In MySQL, you can use “ALTER VIEW” and “CREATE OR REPLACE
VIEW” statements to make changes to an existing view.
Using “CREATE OR REPLACE VIEW” statement
The "CREATE OR REPLACE VIEW" statement can be used to modify an existing
"SQL View" or generate a new one. If the view already exists,
MySQL will simply modify it; if the view is non-existent, it will
create a new view based on the query.
The syntax below can be used to generate the “contacts view” on the basis
of the “employees” table:
"CREATE OR REPLACE VIEW contacts AS
SELECT
firstName, lastName, extension, email
FROM
employees;"
The “result set” is displayed in the picture below:

Now, assume that you would like to insert the “jobtitle” column to the
“contacts view”. You can accomplish this with the syntax below:
"CREATE OR REPLACE VIEW contacts AS
SELECT
firstName,
lastName,
extension,
email,
jobtitle
FROM
employees;"
The “result set” is displayed in the picture below:
Dropping a “SQL View”
The “DROP VIEW” statement can be utilized to delete an existing view
from the database, using the syntax below:
"DROP VIEW [IF EXISTS] [database_name].[view_name]"
The "IF EXISTS" clause is not mandatory in the statement above and is
used to determine if the view already exists in the database. It prevents you
from mistakenly removing a view that does not exists in the database.
You may, for instance, use the "DROP VIEW" statement as shown in the
syntax below to delete the "organization" view:
"DROP VIEW IF EXISTS organization;"
SQL TRANSACTIONS
A transaction can be defined as "a unit of work that is performed against a
database". Transactions are units or work sequences that are performed in a
logical order, either manually by a user or automatically by
a database program.
Put simply, a transaction is the propagation of one or more database
modifications. For instance, if you create a row, update a row, or delete a
row from a table, you are executing a transaction on that table. To maintain
data integrity and handle database errors, it is essential to regulate these
transactions.
Basically, to execute a transaction, you must group several SQL queries and
run them at the same time.
Properties of Transactions
The fundamental properties of a transaction can be defined using the
acronym “ACID” for the properties listed below:
Atomicity − guarantees successful completion of all
operations grouped in the work unit. Or else, at the point of
failure, the transaction will be aborted and all prior
operations will be rolled back to their original state.
Consistency − makes sure that when a transaction is properly
committed, the database states are also correctly updated.
Isolation − allows independent and transparent execution of
the transactions.
Durability − makes sure that in case of a system
malfunction, the outcome or impact of a committed
transaction continues to exist.
To explain this concept in greater detail, consider the steps below for
addition of a new sales order in the “MySQL sample database”:

Start by querying the most recent "sales order number"
from the "orders" table and use the next "sales order
number" as the new "order number".
Then use the "INSERT" clause to add the new "sales order"
into the "orders" table.
Next, retrieve the "sales order number" that was inserted in
the previous step.
Now, "INSERT" the new "sales order items" into the
"orderdetails" table using that "sales order number".
At last, to verify the modifications, select data from both the
"orders" and "orderdetails" tables.
Think about how the sales order data would end up if even a single
step listed here were to fail, for whatever reason. For instance, if the step
for inserting the items of an order into the "orderdetails" table failed, it
would result in an "empty sales order".
This is where "transaction processing" is used as a safety measure. You
can perform "MySQL transactions" to run a set of operations while making
sure that the database cannot end up containing partial operations.
When working with multiple operations as a group, if even one of the
operations fails, a "rollback" can be triggered. If there is no failure, all
the statements are "committed" to the database.
“MySQL Transaction” statements
MySQL offers statements listed below for controlling the transactions:

For initiating a transaction, utilize the "START
TRANSACTION" statement. The "BEGIN" or "BEGIN
WORK" statements are the same as "START
TRANSACTION".
For committing the "current transaction" and making the
modifications permanent, utilize the "COMMIT" statement.
By using the "ROLLBACK" statement, you can simply
undo the current transaction and void its modifications.
By using the "SET autocommit" statement, you can
deactivate or activate the auto-commit mode for the current
session.
By default, MySQL is designed to commit the modifications to the database
permanently. By using the statement below, you can force MySQL not to
commit the modifications by default:
"SET autocommit = 0;
Or
SET autocommit = OFF;"
To reactivate the default mode for auto-commit, you can use the syntax
below:
"SET autocommit = ON;"
Example
Let’s utilize the “orders” and “orderDetails” tables, shown in the picture
below, from the “MySQL sample database” to understand this concept
further.
“COMMIT” transaction
You must first split the SQL statements into logical parts to effectively use a
transaction and decide when the transaction needs to be committed or rolled
back.
The steps below show how to generate a new “sales order”:

1. Utilize the "START TRANSACTION" statement to begin a
transaction.
2. Select the most recent "sales order number" from the
"orders" table and utilize the subsequent "order number" as
the new "order number".
3. "Insert" a new sales order into the "orders" table.
4. "Insert" the sales order items into the "orderdetails" table.
5. Lastly, use the "COMMIT" statement to commit the
transaction, as sketched below.
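Here is a minimal sketch of that flow; the order number and item values are
made up, and the column names are assumed to match the sample "orders" and
"orderdetails" tables:

"-- begin the unit of work
START TRANSACTION;

-- add the new sales order header (order number 10426 is hypothetical)
INSERT INTO orders(orderNumber, orderDate, requiredDate, status, customerNumber)
VALUES (10426, '2020-05-01', '2020-05-08', 'In Process', 103);

-- add one line item belonging to that order
INSERT INTO orderdetails(orderNumber, productCode, quantityOrdered, priceEach, orderLineNumber)
VALUES (10426, 'S10_1678', 10, 95.70, 1);

-- make the changes permanent; use ROLLBACK instead to undo them
COMMIT;"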
Database Recovery Models
A recovery model can be defined as "a database property that controls how
transactions are logged, whether the transaction log requires and allows
backing up, and what kinds of restore operations are available". It is
a database configuration option that specifies the type of backup that can be
performed and enables the data to be restored or recovered in case of a
failure. The 3 types of “SQL Server recovery models” are:
SIMPLE
This model has been deemed as the easiest of all the recovery models that
exist. It offers "full, differential, and file level" backups. However,
backup of the transaction log is not supported. The log space would be
reused whenever the checkpoint operation of the SQL Server background
process takes place. The log file's inactive part is deleted and made
accessible for reuse.
There is no support for point-in-time and page restoration, only the
secondary read-only files can be restored.
Bulk Logged
The "Bulk Logged recovery model" is a unique database configuration
option, which produces similar outcome as the "FULL recovery model"
with the exception of some “bulk operations” that can be logged
only minimally. The “transaction log” file utilizes a method called as
"minimal logging for bulk operations". The caveat being that specific point
in time data can not be restored.
“SQL BACKUP DATABASE” Statement
The “BACKUP DATABASE” statement can be utilized to generate a full
back up of a database that already exists on the SQL server, as displayed in
the syntax below:
"BACKUP DATABASE databasename
TO DISK = 'filepath';"
"SQL BACKUP WITH DIFFERENTIAL" Statement
A "differential backup" is used to selectively back up the sections of the
database which were altered after the last “full backup of the database”, as
displayed in the syntax below:
"BACKUP DATABASE databasename
TO DISK = 'filepath'
WITH DIFFERENTIAL;"
Example
Consider that you have a database called “testDB” and you would like to
create a full back up of it. To accomplish this, you can use the query below:
"BACKUP DATABASE testDB
TO DISK = 'D:\backups\testDB.bak';"
Now, if you made modifications to the database after running the query
above. You can use the query below to create a back up of those
modifications:
"BACKUP DATABASE testDB
TO DISK = 'D:\backups\testDB.bak'
WITH DIFFERENTIAL;"
RESTORING Database
Restoration can be defined as the method by which "data is copied
from a backup and logged transactions are applied to that data". There are a
couple of methods to accomplish this:
T-SQL
The syntax for this method is given below:
"Restore database <Your database name> from disk = '<Backup file
location + file name>';"
For instance, the query below can be utilized to restore a database named
"TestDB" using the backup file named "TestDB_Full.bak", located in "D:\":
"Restore database TestDB from disk = 'D:\TestDB_Full.bak' with replace;"
“SSMS (SQL SERVER Management Studio)”
First connect to the "TESTINSTANCE" instance, then right-click on the
databases folder and click on "Restore database", as shown in the
picture below.

Now, to select the backup file as shown in the picture below, select the
device radio button and click on the ellipsis button.
Next, click Ok and the window shown below will be displayed to you.

Now, from the “top-left corner of the screen select the Files option”, as
shown in the picture below:
Lastly, select “Options” from the “top-left corner of your screen” and click
“OK” to restore the “TestDB” database, as depicted in the picture below:
Chapter 6 A Look at Queries

While we have spent a little bit of time taking a look at some of the
commands and queries that we are able to use when it comes to working in
the SQL language, it is time for us to go more in-depth about these queries
and what they are able to do for some of our needs along the way as well.
When we are working on our own business database and it is all set up the
way that we would like, it is going to be possible that at one point or
another you will want to do a search in order to make sure you are able to
find the right information inside of all that. This is going to make it easier
for us to find the exact information and results that we want. But we do
have to make sure that the database is set up in the right manner so that we
can use the right commands, and see that it is fast and accurate in the
process.
Think of this like when someone comes to your website, searching for that
particular product that they would like to purchase. Do you want them to
get stuck on the website that is slow, and have them see results that have
nothing to do with the item they wanted? Or would you like a fast search
that was helpful and will encourage the person to make that purchase? Of
course, for the success of your business, you are more likely to want the
second option, and we can take a look at how to set up your database and
work with queries in order to make this happen.
Chapter 7 SQL Tools and Strategies
Server management is an essential practice that helps administrators and
programmers to ensure that the database infrastructure performs optimally.
Various SQL tools are used for this purpose. These SQL tools enable
programmers and Database Administrators to perform their roles efficiently
and perfectly. It's vital to note that the database environment is ever-
changing. In this regard, the tools enable the Database Administrators and
programmers to update any changes in SQL databases. To manage the
database infrastructure, SQL database administrators and programmers
apply various tools. The objective of this topic is to examine various
methods of installing and updating SQL server tools, the various features,
and enhancements of new SSMS, kinds of SQL data tools, and the tools
which are used to develop R-code.

Improved and Enhanced Features of SSMS 18.3.1


The latest version of SSMS is an improvement of the previous versions.
Although there have been various enhancements on the SQL server
functionalities, the latest SSMS version has major improvements that
exceed the previous ones. In this connection, the following aspects of the
version have been improved:

1. Improvements on Installation - Compared to the previous
SSMS versions, the latest has a smaller download package.
Additionally, there is an improvement in the installation
procedure. In this regard, it's now possible to install SSMS
in an individual folder, a previously impossible procedure.
Additionally, it's now possible to install the latest SSMS
version in a variety of languages that are not the same as the
operating system's. The improvement done on the language
aspect is crucial as it enables professionals to work smoothly,
especially when they're using a language that is not
the one used by the operating system. For instance, you can
now install the Chinese version on German Windows. Another
improvement in the installation is the ability to import
settings once from v17.x into the current version.
2. Complete Support of SQL Server - SSMS 18 has the full
capacity to support the SQL server. The previous versions
did not fully support the newest SQL servers, and that was
a major limitation. The current version also offers the best
Azure SQL support.
3. Support for SQL Server Management Objects - The version
supports various aspects of SQL Server Management Objects. These include
extensions of the SQL server management objects, for example
resumable index creation. The version also offers support
for the data classification 'rewrite' authorization.
4. Integration of SSMS With Azure Data Studio - The latest SSMS
18.3.1 integrates well with Azure Data Studio, which allows
the sharing of different features found within each studio.
This makes the version more robust. Because of this integration
possibility, there is much improvement in the Azure studio.
For instance, this integration has improved the migration of
services to the Microsoft cloud.
5. Other Improvements to Note - Apart from the enhancements
mentioned above, the latest SSMS version has various other
improvements and support. These include support for Always
Encrypted, the creation of resumable indexes,
improvements and additional features in property dialogs, and
improvement of data classification, which assists in
compliance with SOX and other regulations.
6. General Updates - Under general updates, there are different
improvements. These include exposure of
AUTOGROW_ALL_FILES, and doing away with the
'lightweight pooling' and 'priority boost' options.
Additionally, the latest Firewall Rule dialog enables a user to state
the rule rather than producing one for the user. For the first
time, there is enhanced support for multi-monitor systems,
which allows dialogs and windows to pop up on specific
monitors. Migration to Azure is now easy because of the
addition of the Migrate to Azure option.
7. SSMS Object Scripting - In the latest version, there is an
addition of a new 'CREATE OR ALTER' menu to enable
easy scripting of objects.
8. SSMS Showplan - There are different additions to this
feature that have improved it. Some of the features added
include actual elapsed time and extra operator logic. The Materialize
operator is now displayed, and the actual elapsed time is
synced with the Live Query Stats plan. Additionally,
BatchModeOnRowStoreUsed has been added to the
showplan feature. This enhances query identification.
9. SSMS Database Compatibility Level Upgrade - A new
choice has been added under <Database name>->Tasks-
>Database Upgrade. This launches the new Query Tuning
Assistant to lead the user through various procedures,
including the collection of a performance baseline before
raising the database compatibility level, detection of
workload regressions, and others.
10. SSMS Query Store - A new query wait stats
report has been added.
11. SQL Server Integration Services - There is an
addition of a support feature to enable clients to schedule SQL
Server Integration Services (SSIS) packages that run on the Azure
Government Cloud. The addition to ISDeploymentWizard helps to
authenticate with an Azure Active Directory password. Other SQL
integration services include the Deployment Wizard and a new entry
item, 'Try SSIS in Azure Data Factory'.
12. Data Classification - The data classification task
menu has been rearranged. In this regard, additional sub-menus have
been added to the database task menu, which enables you to open
the report from the menu. Additionally, a new Classification Report
menu has been introduced to the data classification procedures.
13. Vulnerability Assessment - The vulnerability
assessment task menu is now activated on Azure SQL Data
Warehouse. The enabled vulnerability assessment recognizes the
Azure SQL Data Warehouse. Additionally, an export
feature has been added to vulnerability assessment. This feature
exports scan results to Excel.
14. Always Encrypted - There is a new Enable Always
Encrypted checkbox in the Always Encrypted tab. This
offers a simple way of enabling or disabling Always Encrypted for the
database connection. Additionally, Always Encrypted with secure
enclaves has been improved with various features.
15. Flat File Import Wizard - In this wizard, logic has
been added to inform the user that columns may be renamed due to
the import.
16. Data-tier Application Wizard - The feature has
been enhanced to support graph tables and Azure managed
instances. There is also additional support for new logins in SMO
and SSMS that is helpful when linked to an Azure managed instance.
17. General Improvements - Different features of the
existing SSMS infrastructure have been improved. These include
fixes for general crashes, and improvements to the SSMS editor, the
table designer, analysis solutions, data masking, and the help viewer,
among others.
Apart from the above improvements, there is also the removal of specific
features on SSMS. These include:

1. The Command-Line Option -P Removed - This option is no
longer available for security reasons.
2. DMF Standard Policies - These policies are no longer shipped
with the latest SSMS version. In case users require them,
they can get them from GitHub.
3. Configuration Manager Tools - These include SQL server
configuration and the reporting server configuration manager.
The two are no longer part of the latest SSMS version.
4. Generate Scripts/Publish Web Service - The feature has been
deleted from this new version.
5. Deletion of Static Data Masking Feature - This feature is no
longer in the latest version of SSMS.
SQL Server Tools
In the current digital world, data is an essential aspect of any business.
Enterprises can apply data to make critical decisions. Data is also used to
get important information about clients' behavior so that you produce goods
and services that satisfy their needs. Due to its critical role, data should be
stored in a safe environment. Businesses store their data in databases. The
database ensures that information from clients is kept carefully. The
database enables you to query, sort, and manipulate the stored data within
the shortest time possible. In case you're a database administrator or IT
manager, it's essential to understand how to manage and administer
databases for timely business information.
To enable you to manage and administer databases, various tools are
applied. The database tools enable IT managers and administrators to
perform various measurements on their databases. The tools also evaluate
the applications that run on the data to ensure that they're up to date and
work efficiently.
What are Database Tools?
The term database tool is used to denote all the devices, applications,
assistants and utilities that help in the performance of various aspects of
database administration. Database tools are designed to perform specific
tasks, and therefore you require different kinds of devices.
The market for database tools is awash with various kinds of tools. It's,
therefore, crucial to consider various factors before investing in data tools.

1. Database Problem - Before you purchase any database tool,
it's important to evaluate the database problem that you have
encountered. This evaluation is crucial as it'll assist you in
determining the right tools to fix the problem.
2. Database Structure - The database structure is different in
various organizations. Therefore, before choosing your tools,
it is important to consider your database structure based on
your department or organization.
3. Functionality - There are various functions that data tools can
perform. In this regard, you must consider the function that
you require from a data tool before buying it. Examples of
functions include administering your DBMS, creation of
tables and getting information on particular metrics of
database performance. Therefore, it's crucial to select a
device that provides particular functionality.
4. Operating System - It is vital to note that particular database
tools work only on a specific operating system. Before
buying any database tool, it's crucial to evaluate whether it's
compatible with your operating system. Additionally, you
also need to consider the database version. In this regard, it's
advisable to choose tools that work well across various
versions.
5. Integration - Integration here has to do with the compatibility
of the tool you purchase with the database. You need to
understand that integrating your DBMS with tools is
challenging and sometimes may require some coding. In this
regard, it's advisable to look for database tools that integrate
well with your database.
6. Vendor Specific - Different DBMS vendors offer specific
DBMS tools. Therefore, it's essential to consider buying tools
from specific DBMS vendors to simplify the integration
procedure.
7. Separate Installation - There are some database tools that require
a separate installation for different DBMSs. Other tools, on
the other hand, can be installed in a single step across various
DBMSs. You should choose the latter because they're less
time-consuming.
Types of SQL Tools
There are different kinds of SQL tools in the market. When choosing these
tools, you need to consider various factors. These include:
1. What You Want to Manage - For instance, if you want to
administer SQL Server instances, it's advisable to use Azure
Data Studio.
2. Creation and Maintenance of Database Code - You need to
select SQL Server Data Tools (SSDT).
3. Querying SQL Server with a Command-Line Tool - you can
choose mssql-cli.
4. Writing T-SQL Scripts - You should use Visual Studio Code
with the mssql extension.
This section aims to appraise the commonly used tools in SQL Server
management as follows:
Interbase
This is a robust, fast, embeddable SQL database. It comes with
business-grade security, disaster recovery, and the ability to smoothly
handle change. The tool has the following features:
Adheres to SQL standards - The tool is compatible with all SQL
standards, recognizes Unicode, and is suitable for any global character set.
Provides multiple Unicode support, live alerts, and change view tracking.
Lightweight - it is suitable for current CPUs and may be used in a
variety of systems.
Quick recovery - it is suitable for rapid disaster recovery.
dbForge Studio for SQL server
This is one of the best IDE tools which performs a variety of functions
including server management, administration, data reporting, analysis, and
others. The tool has the following features:

Capable database management
High-quality coding support
High-quality SQL Server reporting
Protection - it provides strong information protection
dbWatch
This tool offers full database administration support and is used for
various databases including Oracle, SQL Server, Azure, and others. The
tool is used for maintenance purposes in different environments, including
hybrid and cloud. It has the following characteristics:
Assists in monitoring performance and generating important reports.
Enables routine memory minimization for the SQL server.
Assists in DB administration with short message and electronic mail add-ons.
Support - it provides various support services, including multisite
and data cluster assistance.
Bulk install and bulk alerts - these are used when you're handling
bulk installation and alert services.
SQL Sentry
This is one of the best tools that offer SQL database monitoring services. It
can handle ever-growing business workloads. Due to the increasing need
for high-performance data, many companies are using this tool. The SQL
sentry has the following features:

Assists in the collection and presentation of performance metrics.
Analysis and tuning capabilities - the tool has functionality that
enables SQL query analysis and high tuning capabilities.
It is highly rated as the best DBA solution.
Adminer
This is a data management tool that provides various solutions. These
include the management of databases, tables, columns, relations, and others.
The tool is distributed as a single PHP file and supports various databases on
different systems including MS SQL, Oracle, and MySQL. You can
download the various versions as files. Adminer has various qualities,
including:
Performs a listing of data tables with various functions including
sort, search, and others.
It has a variety of customization options.
It displays running processes and can kill them.
DBComparer
This is a database comparison tool that analyzes and offers insights on
database structures and is easy to use. The tool enables you to compare
different objects, including tables, foreign keys, roles, and others.
DBComparer has different features like:
Routinely contrasts various database structures.
It provides different choices for evaluation.
It has a built-in visual tree for the spontaneous representation
of differences.
EMS SQL Manager Lite for SQL Server
This tool is used for creating SQL database objects and allows various
editing tasks to be done. The tool has a friendly user interface and is loaded
with useful functionality. It can replace SSMS. Its features include:
Support features - the tool supports a variety of SQL features including
the Azure SQL database, Unicode, and others.
Provides tools for query building - the tool provides both
visual and text devices for query development.
Comparison - the database tool helps in comparing and
synchronizing various database structures.
Loaded with an SQL debugger - the tool comes with an SQL
debugger that tracks processes, functions, and scripts.
Firebird
This is an efficient SQL tool that works well with Windows and Linux.
Firebird has various features, including:
Complete support for processes - the tool has full support for
various procedures including standard transactions, multiple
access methods, support for incremental backups, and others.
Application of modern technology - the tool applies the
latest technology like FB25 and FB30, which means that it
offers high-quality services.
Support for cloud - the tool recognizes cloud infrastructure.
Squirrel SQL
This tool supports the management of Java databases. Through this tool,
you can view the SQL database and provide commands. The tool works on
different databases, including IBM, SQL, Oracle, and others. Its key
features include:

Popup menu - through this menu, you can perform various
functions such as editing.
Display of object tree - this shows you the session window.
Charts - it displays charts of tables and how they are
connected.
SQLite Database Browser
This tool is open source and helps in creating and editing various files in
the database. SQLite Database Browser has various features including:
Creation and modification of databases - through this tool,
you can create and edit various databases, tables, and others.
SQL commands - the tool has a log that displays all SQL
commands that have been issued and how they've been used.
Creates simple graphs - the tool is able to plot a graph from
the data in a table.
DBeaver
This tool is open source and used by database administrators and
managers to support various databases like MySQL and IBM Db2. The key role
of the tool is to create and modify databases, run SQL scripts, export data,
and more. The tool has various features, including:
Creation of data - the ability to create and modify data
Plugins - the tool is offered with various plugins
DB Visualizer
This is an open-source universal database tool that helps in the management
of various databases such as Oracle, Sybase, Informix, and others. The tool
has a browser that navigates through database objects. It helps in the
formation and modification of database objects. The core features of DB
Visualizer include:

Management of particular database objects
Forms and modifies various processes and functions
It provides schema assistance

Working with the Queries
When you set up a query, you are essentially sending an inquiry to the
database that you have already built. There are a few ways to do this, but
the SELECT command is one of the best options; it can instantly bring back
the information that we need based on our search.
For example, if you are working with a table that holds all of the products
that you offer for sale, you could use the SELECT command to find the
best-selling products, or the ones that meet another criterion you have at
that time. The request works on any product information stored in the
database, and this is a very common operation when working with a
relational database.
Working with the SELECT Command
Any time that you plan to query your database, the SELECT command is
the best option to make this happen. This command is important because it
starts and executes the queries that you would like to send out. In many
cases, you will have to add something to the statement, as sending out
SELECT on its own will not get the results that you want. You can name
the product that you would like to find along with the command, or work
with some of the features that show up as well.
Whenever you work with the SELECT command on one of your databases
in the SQL language, there are four main clauses to focus on. These are the
four clauses that need to be present in order to complete the command and
see some good results. They include:
SELECT - this command will be combined with the FROM
command in order to obtain the necessary data in a format
that is readable and organized. You will use this to help
determine the data that is going to show up. The SELECT
clause introduces the columns that you would like to see in
the search results, and then you can use the FROM clause to
point at the exact table that you need.
FROM - the SELECT and the FROM commands often go
together. It is mandatory because it narrows your search from
everything in the database down to just the things that you
would like. You will need at least one FROM clause for this
to work. A good syntax that uses both SELECT and FROM
properly is:
SELECT [ * | ALL | DISTINCT COLUMN1,
COLUMN2 ]
FROM TABLE1 [ , TABLE2 ];
WHERE - this is what you will use when there are conditions
to apply within the query. It is the element in the query that
displays the selective data after the user puts in the
information that they want to find. If you are using this
feature, the conditions can be combined with the AND and
OR operators. The syntax that you should use for the
WHERE clause is:
SELECT [ * | ALL | DISTINCT COLUMN1,
COLUMN2 ]
FROM TABLE1 [ , TABLE2 ]
WHERE [ CONDITION1 | EXPRESSION1 ]
[ AND CONDITION2 | EXPRESSION2 ];
ORDER BY - you are able to use this clause to arrange the
output of your query. The server decides the order and the
format in which the information comes up for the user after
they run their query. The default for this clause is to organize
the output from A to Z (ascending), but you can make the
changes that you would like. The syntax is the same as the
one above, but add the following line at the end:
ORDER BY COLUMN1 | INTEGER [ ASC | DESC ];
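To see how these four clauses fit together, here is a small, hedged example;
the products table and its product_name, units_sold, and category columns
are invented for illustration:
SELECT product_name, units_sold
FROM products
WHERE category = 'books' AND units_sold > 100
ORDER BY units_sold DESC;
This asks the database for the name and sales count of every book that sold
more than 100 units, listing the best sellers first.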
You will quickly see that all of these clauses are helpful, and you can use
them together with the SELECT command whenever you would like. They
can often pull the information you need out of the database more efficiently
than a bare SELECT will. But there are going to be many times when a
simple SELECT is plenty to help you get things done when it is time to
search your database as well.
A Look at Case Sensitivity
Unlike some of the other coding languages that are out there and that you
may be tempted to use on your database searches, you may find that the
case sensitivity in SQL is not going to be as important as it is in some of
those other ones. You are able to use uppercase or lowercase words as you
would like, and you can use either typing of the word and still get the part
that you need out of the database. It is even possible for us to go through
and enter in some clauses and statements in uppercase or lowercase,
without having to worry too much about how these commands are going to
work for our needs.
However, there are a few exceptions to this, which means there are going to
be times when we need to worry about case sensitivity in this language a
bit more than we may want to. One of the main times for this is when we
are looking at the data itself. For the most part, the data that you are storing
should be entered in uppercase letters. This is helpful because it ensures
that there is some consistency in the work that you are doing and makes it
easier for us to get the results that we want.
For example, you could run into some issues down the road if one of the
users is going through the database and typing in JOHN, but then the next
person is typing in John, and then the third person is going through and
typing in john to get the results. If you make sure that there is some
consistency present, you will find that it is easier for all of the users to get
the information that they want, and then you can make sure that you are
able to provide the relevant information back when it is all done.
In this case, storing the letters in uppercase is often one of the easiest ways
to work because it makes the convention obvious, and the user will quickly
see that this is the norm. If you choose not to go with uppercase, then you
should find some other method that keeps the consistency you are looking
for throughout. This lets the user figure out what convention you are using
and helps them find what they need with their queries.
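One hedged way to protect yourself when the stored data might be mixed-case
is to normalize both sides of the comparison; the customers table and
first_name column here are invented for illustration:
SELECT *
FROM customers
WHERE UPPER(first_name) = 'JOHN';
UPPER() converts the stored value to uppercase before the comparison, so
'John', 'JOHN', and 'john' all match.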
As you go through this, you may notice that the queries and transactions
that you create for your database are important to the whole system and to
ensuring that it actually works in the manner that you would like. In the
beginning, this may feel like busywork that is not worth your time or
energy, or not that big of a deal. We may assume that with a good query
from the user, they will be able to find all of the information that they need
in no time.
Chapter 8 Exercises, Projects And Applications
SQL in full is the Structured Query Language and is a kind of ANSI
computer language that has been specially designed to access, manipulate,
and update database systems. SQL has different uses; the most significant
of them is managing data in database systems that can store data in table
forms. Additionally, SQL statements have been used regularly in updating
and retrieving data from databases.
As we have known since childhood, the best way to learn a new concept is
by practicing it. SQL is no exception; you have to work through various
exercises as a way of learning it. This section provides a list of exercises
that, once you solve them, will put you in a position to resolve your own
issues related to this topic. Check them out!
Examples of Exercises in SQL
The following are random exercises that you can do in SQL. Assuming you
are an employee of a particular research company and you have the task of
finding out data about customers in a specific business establishment.
Below are some of the queries you are more likely to encounter.
- You are instructed to construct a query that displays every customer
that has spent over 100 dollars in a hotel. This exercise will help you
acquire mathematical skills in SQL.
- Draft queries that will show every customer that resides in New
York City and has spent over two hundred dollars in the business
establishment.
- Draft queries that will show every customer that either lives in
New York City OR has spent over two hundred dollars in the business
establishment.
- Draft queries that will show every customer that either resides in
New York City or has NOT spent over two hundred dollars in the
business establishment.
- Draft queries that will show every customer that DOES NOT
reside in New York City and has NOT spent over two hundred dollars
in the business establishment.
- Draft queries that will show the orders that were either NOT issued on
the last day of the week by a salesman whose identification number is 505
or below, or where the amount spent on them is one thousand dollars
or below.
- Write down a SQL statement that displays the salesman's
identification number, his name, and the city that he comes from. Also,
ensure that only commissions within the given range - above 0.15% but
below 0.20% - are captured.
- Draft a SQL query that shows every order on which customers spend no
more than two hundred dollars; don't include orders that were made past the
tenth of February, and only include orders placed by customers that have
identification numbers below 5009.
- Pen down SQL statements to isolate the rows for orders made
before the first of August where the amount spent on them is below one
thousand dollars. Secondly, let them include only customer identification
numbers that are more than one thousand.
- Draft a list of SQL queries that will display the order number, the amount
of money spent, the number of targets achieved and those that have not
been met, the name of the salesman working on them, and their success
rates.
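As a hedged sketch of how the first two exercises might be solved - the
customers table and its city and amount_spent columns are invented names,
so adjust them to your own schema:
SELECT *
FROM customers
WHERE amount_spent > 100;

SELECT *
FROM customers
WHERE city = 'New York' AND amount_spent > 200;
The second query simply combines the two conditions with AND, which is
the pattern most of the remaining exercises build on.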
Projects in SQL Programming
Below are random projects that you can encounter in SQL programming.
They have been picked at random, and I believe that if you practice them,
you will be better equipped to handle such situations. Before we list the
projects, we first should understand the difference between exercises and
projects. Logically, an exercise is more of a quick test you can do without
a lot of complications, while projects are more complicated and
sophisticated; they require advanced skills as well as data research. Having
learned that, let's discuss the potential projects you can encounter in SQL
programming.
i) Interviews
When making software for a particular company, you will need knowledge
of the existing system that belongs to that company. For that purpose, you
will have to carry out interviews with some individuals working in that
company and collect critical information. You will have to interview people
that are familiar with the software; such people could be working as hostel
wardens or trainers.
ii) Discussions (Groups)
This can be a kind of group discussion between employees of the company
you are working with. At the start, a good number of ideas might appear
clustered together or mixed with concepts that already exist; such ideas are
often brought on board by programmers.
Additionally, research can be done through online observation, which is a
procedure of obtaining more essential details about the existing software or
web apps from the web. The primary purpose of this project is getting as
close as possible to the system. SQL programming plays a critical role in
ensuring the systems are up and running as recommended.
Applications of SQL
The self-join option lets you carry out the joining process on the same
table, saving you the time you would spend organizing a final table. There
are, however, a few situations where this is a good option. Imagine the
chart you created earlier has columns consisting of country and continent.
When faced with the task of listing countries located on the same continent,
a clearly outlined result set should give you a glimpse of what to expect.
The SQL outer join can further be subdivided into three different types: the
left join, the right join, and the full outer join. The primary role of an outer
join is returning all the rows from one table and, where the joining
condition is matched, including the columns from the other table. Outer
joins are different from inner joins in the sense that an inner join cannot
include the unmatched rows in the final set of results.
When using an order entry system, for instance, you may be faced with
situations where you need to list every employee regardless of whether they
have placed customer orders. In such a case, this kind of join is beneficial.
When you opt to use this kind of join, all employees, including those that
have no matching orders, will be included in the final result.
The left join is a kind of outer join that returns each row from the left side
of the pair of tables, plus those rows that match from the right side. In case
there are no matches on the right side, the left join returns a null value for
each of those columns. The right join, another type of outer join, has the
task of returning each row from the right side of the pair and merging it
with the left side; again, if there are no matching values, the join returns
null values for those columns.
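As a hedged illustration of the left and right variants - the employees and
orders tables and their columns are invented for this sketch:
SELECT e.employee_name, o.order_id
FROM employees e
LEFT JOIN orders o ON o.employee_id = e.employee_id;
Every employee appears in the result; employees with no matching order get
NULL in the order_id column. Swapping LEFT for RIGHT would instead keep
every order.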
The full outer join has the task of returning every row that an inner join
would return, plus the unmatched rows from both tables; where no match is
found, this join returns null values for the columns of the other table.
The cross join is essentially the Cartesian product expressed in SQL.
Picture this: you require the whole set of combinations available between
both tables, even from a single query. You will have to use the cross join to
achieve that. To help you understand this join better, you can go back to the
two tables we created at the beginning of the article. Look at both the
columns and try to compare the impact each one of them has on the final
result. The cross join plays an essential role in ensuring accuracy during
merging. You ought to note that there are apparent differences between
cross joins and outer joins, despite the fact that the description makes them
look almost similar. We hope to discuss that in this chapter as well.
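A minimal sketch of a cross join, again with invented table names (sizes and
colors):
SELECT s.size_name, c.color_name
FROM sizes s
CROSS JOIN colors c;
The result contains one row for every possible size and color combination,
which is the Cartesian product described above.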
Similarly, the MySQL system allows you to declare more than one variable
of a common data type in a single statement. Again, many technicians have
had issues with how this command relays information to related databases.
Among the various methods of storing variables, this one has proven to be
more secure than others; consequently, it has become the most popular of
them all.
Variables can be applied in mathematical expressions, for example, adding
values together or combining and holding text, and they can be used as part
of the general logic of a program. Variables are also applied in storing
information so that it can take part in calculations. Additionally, variables
can be part of the parameters used in procedural statements. Declaring with
an initial value is a two-in-one method that not only lets you declare a
variable but also sets it up with a value of the same data type. Going back
to the examples we gave earlier, we can affirm that varchar is a kind of data
type that lets you hold more than one character in just a single string.
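In T-SQL syntax, a hedged sketch of declaring and initializing two variables
of the same varchar type in one statement (the names and values are made up):
DECLARE @first_name VARCHAR(30) = 'Anna',
        @last_name VARCHAR(30) = 'Smith';
SELECT @first_name + ' ' + @last_name AS full_name;
In MySQL the equivalent would use SET @first_name = 'Anna'; either way, the
idea is to hold a value so it can take part in later expressions.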
Up to this point, you should be in a position to understand SQL exercises
and projects as well as the various types in existence. This will put you in
an excellent place to tackle errors if they occur and to prevent them from
happening in the first place. When Mark Suaman, a renowned data scientist
and a graduate of Harvard University, first used varchar, he recommended
it for being efficient and accurate. He rated it among the best data types on
the market today. It does not leave much room for potential errors, and it is
hard to interfere with such a highly secure kind of data type.
Since its introduction in the computing world, SQL has played a significant
role in revolutionizing systematic data storage and direct retrieval. As the
digital world continues to grow steadily, the amount of stored data quickly
escalates, making organizational and personal data pile up. SQL therefore
acts as a platform where this data is stored while offering direct access
without limitations. As such, SQL is used in different sectors, including
telecommunication, finance, trade, manufacturing, education, and transport.
It deals primarily with data but also brings other significant benefits with
its application.
Data Integration
Across the sectors mentioned above, one of the main applications of SQL is
the creation of data integration scripts, commonly written by administrators
and developers. SQL databases comprise several tables which may contain
different data. When these data are integrated, they provide meaningful
information and therefore increase productivity. Data integration scripts are
crucial in any given organization, including the government, as they offer
trusted data which can be utilized to promote the achievement of the set
goals.
Analytical Queries
Data analysts regularly utilize Structured Query Language to smooth
their operations, especially when establishing and executing queries. When
data from different sources are combined, the result is more comprehensive
information that is critical for any individual or organization. As analysts
use an analytical query structure, queries and tables from SQL are fed into
the structure to deliver crucial results from varying sources. In this case,
data analysts can readily acquire different queries and customize them to
produce more comprehensive data to depend on as solutions.
Data Retrieval
Another important application of SQL is retrieving data from different
subsets within a database holding big data. This is essential in financial
sectors and analytics, where the data typically consists of mixed numerical
and statistical values. The most commonly used SQL elements are CREATE,
SELECT, DELETE, and UPDATE, among others. The technique works
because the user can quickly search the database and acquire the needed
data as SQL sieves the information to bring out the desired rows. In some
cases, the language may deliver similar or related data when the required
data is missing. This is crucial as one can compare the results as well as
make changes where the need arises.
Chapter 9 Common Rookie Mistakes
Achieving an error-free implementation or design is considered to be one of
the ultimate goals in handling any programming language. A database user
can commit errors by simply using inappropriate naming conventions,
writing the programming syntax improperly (typographical errors like a
missing apostrophe or parenthesis), or entering a data value that does not
correspond to the data type being defined.
To simplify things, SQL has created a way to return error messages so that
users or programmers will be aware of what is happening in the database
system. This will further lead to taking corrective measures to improve the
situation. Some of the common error-handling features are the WHENEVER
clause and the SQLSTATE status parameter.
SQLSTATE
The host variable or status parameter SQLSTATE is one of the SQL error-
handling tools that includes a wide selection of anomalous programming
conditions. It is a five-character string that consists of uppercase letters
from A to Z and numeral values from 0 to 9. The first two characters refer
to the class code, while the next three signify the subclass code. The
indicated class code is responsible for identifying the status after an SQL
statement has been completed – whether it is successful or not. If the
execution of the SQL statement is not successful, then one of the major
types of error conditions will be returned. Additional information about the
execution of the SQL statement is also indicated in the subclass code.
The SQLSTATE is always updated after every operation. If its value is set
to ‘00000’, this means that the execution was successful, and you can
proceed to the succeeding operation. If it contains a string other than the
five zeroes, then the user has to check his programming code to correct the
error committed. There are multiple ways to handle a certain SQL error,
which normally depend on the class and subclass codes indicated by the
SQLSTATE.
WHENEVER Clause
Another error-handling mechanism, the WHENEVER clause focuses on
execution exceptions. Through this clause, an error is acknowledged, and
the programmer is given an option to rectify it. This is a lot better than
doing nothing when an error occurs. If you cannot correct or reverse the
error that was committed, then the application program can simply be
terminated gracefully.
The WHENEVER clause should be written before the executable part of the
SQL code, in the declaration section to be exact. The standard syntax for
the said clause is:
WHENEVER CONDITION ACTION;
CONDITION – the value can either be set to ‘SQLERROR’ (will return
TRUE if the class code value is not equivalent to 00, 01 or 02) or ‘NOT
FOUND’ (will return TRUE if the SQLSTATE value is equivalent to
02000)
ACTION – the value can either be set to ‘CONTINUE’ (program execution
is continued as per normal) or ‘GOTO address’ (a designated program
address is executed)
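As a hedged sketch of how this looks inside an embedded SQL program (the
error_exit label is an invented name):
EXEC SQL WHENEVER SQLERROR GOTO error_exit;
EXEC SQL WHENEVER NOT FOUND CONTINUE;
The first line sends control to the error_exit routine whenever a statement
fails; the second tells the program to keep going when a query simply finds
no rows.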

DATEADD Function
Syntax
DATEADD (datepart, number, date)
This function returns the datetime value obtained by adding to the date the
number of intervals of type datepart equal to the number argument. For
example, it is possible to add any number of hours, days, minutes, years,
and so on to the date that works best for us. Valid values for the datepart
argument are given below and are taken from BOL.
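A small hedged example of the function in action:
SELECT DATEADD(day, 3, '2020-10-20');
-- returns 2020-10-23 00:00:00.000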
DATEDIFF Function
Syntax
DATEDIFF (datepart, startdate, enddate)
The interval returned here is measured in the units given by the datepart
argument, the same values listed above for the DATEADD function.
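A hedged example:
SELECT DATEDIFF(day, '2020-01-01', '2020-03-01');
-- returns 60, the number of day boundaries crossed between the two dates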
One thing to note here is that the departure time, stored in the time_out
column, and the arrival time, stored in the time_in column, are kept in our
Trip table under the datetime type. This is a bit different if you are using
any SQL Server version before 2000, because those versions did not have
separate temporal data types. For us, that means that when only a time is
inserted into a datetime field, the time is supplemented with the default
date value.
Firstly, for flights that depart on one day and arrive on the next, the
duration calculated as a simple difference will be incorrect. Secondly, it is
never reliable to assume which date is stored alongside the time, since the
date is only there because the value has to conform to the datetime type.
But how do we determine that the plane landed the next day? Here the
description of the subject area helps: it says that a flight cannot last more
than a day.
But back to our example. Assuming that the departure/arrival time is a
multiple of a minute, we can define it as the sum of hours and
minutes. Since the date/time functions work with integer values, we
reduce the result to the smallest interval - minutes. So, the departure time
of flight 1123 in minutes:
SELECT DATEPART(hh, time_out)*60 + DATEPART(mi, time_out)
FROM trip WHERE trip_no=1123
From there we are going to work on the arrival time and how we are able to
use this for our needs. Some of the coding that we need to work with to
show this will be below:
SELECT DATEPART(hh, time_in)*60 + DATEPART(mi, time_in) FROM
trip WHERE trip_no=1123
When we get to this point, we need to compare the two times. We need to
know whether the arrival time is greater than or equal to the departure
time. If it is, then the flight duration is simply the arrival minutes minus
the departure minutes. Otherwise, the plane landed the next day, so we add
one day (1440 minutes) to that difference.
Here, in order not to repeat long constructions in the CASE statement, a
subquery is used. Of course, the result turned out to be rather cumbersome,
but absolutely correct in the light of the comments made to this problem.
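A hedged sketch of such a query, built only from the two expressions shown
above, could look like this:
SELECT CASE WHEN arr >= dep THEN arr - dep
            ELSE arr - dep + 1440
       END AS duration_minutes
FROM ( SELECT DATEPART(hh, time_out)*60 + DATEPART(mi, time_out) AS dep,
              DATEPART(hh, time_in)*60 + DATEPART(mi, time_in) AS arr
       FROM trip
       WHERE trip_no = 1123 ) AS t;
The subquery computes both times in minutes once, so the CASE expression
does not have to repeat the long DATEPART constructions.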
Example. Determine the date and time of departure of flight 1123.
The table of completed flights Pass_in_trip contains only the date of the
flight, but not the time because, in accordance with the subject area, each
flight can be operated just one time per day. To solve this kind of problem,
we need to add the departure time stored in the Trip table to the date stored
in the Pass_in_trip table.
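A hedged sketch of such a query, assuming the date column in Pass_in_trip is
simply named date:
SELECT DISTINCT DATEADD(mi,
       DATEPART(hh, t.time_out)*60 + DATEPART(mi, t.time_out),
       pt.date)
FROM pass_in_trip pt
JOIN trip t ON t.trip_no = pt.trip_no
WHERE t.trip_no = 1123;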
Take some time to type in the code above and see what the output is. This
should tell us a bit more about the time within our output. If you type part
of it incorrectly, you will end up with an error message, so check that you
write it all out properly for the best results here.
DISTINCT is necessary here to exclude possible duplicates, since the flight
number and date are repeated in this table for every passenger on the flight
we are looking at.
Datename Function
Syntax
DATENAME (datepart, date)
This function returns the symbolic (character) representation of the
datepart component of the specified date. The datepart argument identifies
the date component and can take only one of the values found in the table.
This gives us a convenient way to concatenate the components of a date and
present it in whatever format we need, for ourselves or for others.
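A hedged example, assuming the server language is English:
SELECT DATENAME(month, '2020-10-20');
-- returns 'October'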
It should be noted that this function returns different values for the
dayofyear and day arguments of datepart. The first of these gives the
representation of the day counted from the beginning of the year, while the
second gives the day of the month.
In some situations, the DATEPART function is better replaced with a
simpler function. This will often depend on what we are trying to do within
the code we use.
All of these date functions are important for handling the code that we need
along the way. Make sure you use the right one in these types of queries to
get the results you are looking for.
Chapter 10 Tables
Your tables are used to store the data or information in your database.
Specific names are assigned to the tables to identify them properly and to
facilitate their manipulation. The rows of the tables contain the information
for the columns.
Create tables
The following are the simple steps:
Step #1– Enter the keywords CREATE TABLE
These keywords will express your intention and direct what action you have
in mind.
Example: CREATE TABLE
Step #2–Enter the table name
Right after your CREATE TABLE keywords, add the table name. The table
name should be specific and unique to allow easy and quick access later
on.
Example: CREATE TABLE “table_name”
The name of your table must not be easy to guess by anyone. You can do
this by including your initials and your birthdate. If your name is Henry
Sheldon, and your birthdate is October 20, 1964, you can add that
information to the name of your table.
Let's say you want your table to be about the traffic sources on your
website; you can name the table "traffic_hs2064".
Take note that all SQL statements must end with a semicolon (;). All the
data variables must be enclosed with quotation marks (“ “), as well.
Example: CREATE TABLE traffic_hs2064
Step #3– Add an open parenthesis in the next line
The parenthesis will indicate the introduction of the columns you want to
create.
Example: CREATE TABLE “table_name”
(
Let’s apply this step to our specific example.
Example: CREATE TABLE traffic_hs2064
(
In some instances, the parentheses are not used.
Step #4–Add the first column name
What do you want to name your first column? This should be related to the
data or information you want to collect for your table. Always separate your
column definitions with a comma.
Example: CREATE TABLE “table_name”
(“column_name” “data type”,
In our example, the focus of the table is on the traffic sources of your
website. Hence, you can name the first column "country".
Example: CREATE TABLE traffic_hs2064
(country
Step #5 – Add more columns based on your data
You can add more columns if you need more data about your table. It’s up
to you. So, if you want to add four more columns, this is how your SQL
statement would appear.
Example: CREATE TABLE “table_name”
(“column_name1” “data type”,
“column_name2” “data type”,
“column_name3” “data type”,
“column_name4” “data type”);
Let’s say you have decided to add for column 2 the keyword used in
searching for your website, for column 3, the number of minutes that the
visitor had spent on your website, and for column 4, the particular post that
the person visited. This is how your SQL statement would appear.
Take note:

The name of the table or column must start with a letter, then
it can be followed by a number, an underscore, or another
letter. It’s preferable that the number of the characters does
not exceed 30.
You can also use a VARCHAR (variable-length character)
data type to help create the column.
Common data types are:
date – date specified or value
number (size) – you should specify the maximum
number of column digits inside the open and close
parentheses
char (size) – you should specify the size of the fixed
length inside the open and close parentheses.
varchar (size) – you should specify the maximum size
inside the open and close parentheses. This is for
variable lengths of the entries.
Number (size, d) – This is similar to number (size),
except that ‘d’ represents the maximum number of
digits (from the decimal point) to the right of the
number.
Hence, if you want your column to show 10.21, your data type would be:
number (4,2)
Example: CREATE TABLE traffic_hs2064
(country varchar (40),
keywords varchar (30),
time number (3),
post varchar (40) );
Step #6 – Add CONSTRAINTS, if any
CONSTRAINTS are rules that are applied for a particular column. You can
add CONSTRAINTS, if you wish. The most common CONSTRAINTS are:

“NOT NULL” – this indicates that the columns should not


contain blanks
“UNIQUE” – this indicates that all entries added must be
unique and not similar to any item on that particular column.
In summary, creating a table using a SQL statement will start with the
CREATE TABLE, then the “table name”, then an open parenthesis, then the
“column names”, the “data type”, (add a comma after every column), then
add any “CONSTRAINTS”.
Deleting Tables
Deleting tables, rows or columns from your database is easy by using
appropriate SQL statements. This is one of the commands that you must
know to be able to optimize your introductory lessons to SQL.
Here are steps in deleting tables:
Step #1 – Start with the DELETE command
Begin your statement with the DELETE keyword. Downloading Windows'
MySQL Database, MySQL Connectors, and MySQL Workbench can
facilitate the process.
Expert SQL users may laugh and say that these steps should not be included
in this book. But for beginners, it is crucial to state specifically what steps
should be done. Imagine yourself learning a totally new language; Russian
for example,and you’ll know what I mean.
Step #2– Indicate from what table
You can do this by adding the word "FROM" and the name of the table:
DELETE FROM "table_name"

Step #3 – Indicate the specific column or row by adding "WHERE"
If you don't indicate the "WHERE" clause, all your rows would be deleted,
so ensure that your statement is complete.
Example: DELETE FROM "table_name"
WHERE "column_name" = value
Hence, if you want to delete every row in the table, simply use:
DELETE FROM "table_name";
And if you only want to remove certain rows, add a condition, for example:
DELETE FROM traffic_hs2064
WHERE time = 5;
Step #4–Complete your DELETE statement by adding the necessary
variables
Example: DELETE FROM “table_name”
WHERE “column_name”
OPERATOR “value ”
[AND/OR “column”
OPERATOR “value”];
Deleting the wrong tables from your database can cause problems, so,
ascertain that you have entered the correct SQL statements.
Inserting Data into a Table
You can insert a new data into an existing table through the following steps.
Step #1–Enter the key words INSERT INTO
Select the key words INSERT INTO. The most common program, which is
compatible with SQL is windows MySQL. You can use this to insert data
into your table.
Step #2 - Add the table name
Next, you can now add the table name. Be sure it is the correct table
Example: INSERT INTO“table_name”
Using our own table:
Example: INSERT INTO traffic_hs2064
Step #3–Add Open parenthesis
You can now add your open parenthesis after the table name and before the
column_names. Remember to add commas after each column.
Example: INSERT INTO“table_name”
(
Using our own table:
Example: INSERT INTO traffic_hs2064
(
Step #4–Indicate the column
Indicate the column where you intend to insert your data.
Example: INSERT INTO“table_name”
(“column_name”,. . . “column_name”
Step #5– Close the columns with a close parenthesis
Don’t forget to add your closing parenthesis. This will indicate that you
have identified the columns accordingly.
Example: INSERT INTO“table_name”
(“first_columnname”, . . .“last_columnname”)
Step #6–Add the key word values
The key word values will help your selection be more specific. This is
followed by the list of values. These values must be enclosed in parentheses
too.
Example: INSERT INTO“table_name”
(“first_columnname”, . . .“last_columnname”)
values (first_value, . . . last_value
Step #7– Add the closing parenthesis
Remember to add the close parenthesis to your SQL statement. This will
indicate that the list of values goes no further.
Example: INSERT INTO“table_name”
(“first_columnname”, . . .“last_columnname”)
values (first_value, . . . last_value)
Step #8–Add your semicolon
All SQL statements end up with a semicolon, with the exception of a few.
Example: INSERT INTO“table_name”
(“first_columnname”, . . .“last_columnname”)
values (first_value, . . . last_value);
Take note that strings must be enclosed in single quotation marks, while
numbers are not.
Using our sample table, you can come up with this SQL statement:
Example: INSERT INTO traffic_hs2064
(country, keywords, time)
values ('America', 'marketing', 10);
You can insert more data safely without affecting the other tables. Just make
sure you’re using the correct SQL commands or statements.
Dropping a Table
You can drop or delete a table with a few strokes on your keyboard. But
before you decide to drop or delete a table, think about the extra time you
may spend restoring it back, if you happen to need it later on. So, be careful
with this command.
Dropping a table
Dropping a table is different from deleting the records/data in the table.
When you drop a table, you are deleting the table definition plus the
records/data in the table.
Example: DROP TABLE “table_name”
Using our table, the SQL statement would read like this.
Example: DROP TABLE traffic_hs2064;
DROPPING your table is easy as long as you are able to create the proper
SQL.
Using the ALTER TABLE Query
There will be several times you need to use the ALTER TABLE command.
This is when you need to edit, delete or modify tables and constraints.
The basic SQL statement for this query is:
Example: ALTER TABLE “table_name”
ADD “column_name” data type;
You can use this base table as your demo table:
Traffic_hs2064
Country Searchword Time Post
America perfect 5 Matchmaker
Italy partner 2 NatureTripping
Sweden mate 10 Fiction
Spain couple 3 News
Malaysia team 6 Health
Philippines island 5 Entertainment
Africa lover 4 Opinion

If your base table is the table above, and you want to add another column
labeled City, you can create your SQL query this way:
Examples: ALTER TABLE Traffic_hs2064
ADD City char(30);

The output table would appear this way:


Traffic_hs2064

Country Searchword Time Post City


America perfect 5 Matchmaker NULL
Italy partner 2 NatureTripping NULL
Sweden mate 10 Fiction NULL
Spain couple 3 News NULL
Malaysia team 6 Health NULL
Philippines island 5 Entertainment NULL
Africa lover 4 Opinion NULL

You can also ALTER a table to ADD a constraint such as NOT NULL.
Example: ALTER TABLE Traffic_hs2064
MODIFY City char(30) NOT NULL;
This will modify the City column so that it no longer accepts NULL values.
You can also ALTER TABLE to DROP COLUMNS such as, the example
below:
Example: ALTER TABLE Traffic_hs2064 DROP COLUMN Time;

Using the second table with this SQL query, the resulting table will be this:
Traffic_hs2064
Country Searchword Post City
America perfect Matchmaker NULL
Italy partner NatureTripping NULL
Sweden mate Fiction NULL
Spain couple News NULL
Malaysia team Health NULL
Philippines island Entertainment NULL
Africa lover Opinion NULL

You can ALTER TABLE by adding a UNIQUE CONSTRAINT. You can
construct your SQL query this way:
Example: ALTER TABLE Traffic_hs2064
ADD CONSTRAINT uc_Country UNIQUE (Country, SearchWord);
In addition to these uses, the ALTER TABLE can also be used with the
DROP CONSTRAINT like the example below.
Example: ALTER TABLE Traffic_hs2064
DROP CONSTRAINT uc_City;
Here are examples of CONSTRAINTS.

NOT NULL
This constraint indicates that NULL values are not allowed in the column of
a stored table.
CHECK
This will ensure that all values in the column meet the specified criteria.

UNIQUE
This ascertains that all values in the columns are distinct or unique.

PRIMARY KEY
This indicates that the values in one or more columns are NOT NULL and
UNIQUE at the same time.

FOREIGN KEY
This will ascertain that the values of columns from different tables match.

DEFAULT
There is a specified DEFAULT value for the column, which is used when no
value is supplied; without a default, missing entries may appear as NULL.
Make sure you use these constraints properly to make the most out of your
SQL queries.
Chapter 11 The Database
A database is a collection of data that is organized so it can be easily
accessed, managed, and updated. Computer databases typically contain
collections of data records or files holding information about sales
transactions or interactions with specific clients.
In a relational database, digital information about a particular client is
organized into rows, columns, and tables, some of which we choose to index
to make it easier to find the data that is the most relevant to our work.
A graph database, by contrast, works with edges and nodes to make and
characterize the connections between data entries, and querying it requires
a semantic query syntax.
Many databases also offer ACID guarantees - atomicity, consistency,
isolation, and durability - to help ensure that the information we see is
reliable and that the transactions we run are completed.
Different Types of Databases
There have been a lot of advancements when it comes to databases through
the years. Databases have gone through many generations, and today we get
to work with ones that are more object-oriented and ones based on the cloud
or SQL.
Databases can be grouped by the kind of content that they hold, which can
make it easier to work with them and find the one that we need. In
computing, databases are sometimes classified by their organizational
approach.
One thing that we will notice in this part is that there are many different
types of databases we are able to work with, starting with the relational
database, all the way to the distributed database, the cloud database, the
NoSQL database, and the graph database as well. Let's take a look at how
each one works.
First is the relational database. This model dates back to 1970 and is
considered one of the best options for many businesses. It holds lots of
tables that place information into predefined categories: each column in a
table holds one category of information, and each row holds a value for
each of those columns.
This kind of database relies on the SQL language that we have been talking
about so far, which is the standard client and application program interface
for it. Because of these features, a relational database is easy to create and
work with, and it can handle most of what you want in the process.
In addition to the relational database, we can work with the distributed
database, which is a little bit different. This is a type of database where the
parts of the database are stored in different physical locations, and
processing is replicated or distributed among various points on the network.
You can choose whether to make this database type heterogeneous or
homogeneous. In a homogeneous distributed database, all of the physical
locations have the same underlying hardware and run the same operating
systems and database applications.
In a heterogeneous one, the hardware, operating systems, and database
applications can differ between locations. Either way, the goal is to keep
the information available in the right places as we go.
The next kind of database that we want to work with is the cloud database.
This is a database that has been optimized or built for a virtualized
environment, which can be a hybrid cloud, a private cloud, or a public
cloud.
This kind of database is important because it provides a ton of advantages.
For example, you pay only for the bandwidth and storage capacity that you
actually use, and it provides the scalability we need for whatever databases
we want to work with.
In addition to all of this, the cloud database offers enterprises the chance to
support business applications as a software-as-a-service deployment,
storing information as you need it without pushing it onto your own servers
along the way.
Next on the list is the NoSQL database. These are useful for large,
distributed sets of data. They are a good choice when you need a level of
performance on big, loosely structured data that relational databases are not
able to provide.
These kinds of databases are at their best when the company using them has
to analyze a large amount of unstructured information, or information that
has been spread across several virtual servers.
We can also work with object-oriented databases along the way. Items
created with object-oriented programming languages are often stored in
relational databases, but an object-oriented database is a more natural fit
for them.
For example, a multimedia record can be stored in such a database as a
complete data object, rather than being broken down into alphanumeric
values.
Then there is the graph database as well. This kind of database is
graph-oriented and is a kind of NoSQL database that uses graph theory to
store, map, and query the relationships that we need. The best way to think
about this kind of database is as a big collection of nodes and edges, where
each node represents an entity and each edge represents the relationship
between two nodes.
These graph databases are not used as much as the others, but they are
growing in popularity thanks to how they can help with analyzing
interconnections.
For example, it is not uncommon for a company to utilize a graph database
to help mine information that pertains to their clients from their online
interactions. It is also common for this kind of database to use a language
known as SPARQL. This language is a bit different from SQL, but it allows
us to query graphs and the databases that use them.
Creating a Database with SQL
Under Unix, database names are case-sensitive (unlike SQL keywords), so
you must always refer to your database by the exact name you created it
with - for example, as zoo, not as Zoo, ZOO, or some other variant. This is
likewise true for table names. (Under Windows, this restriction doesn't
apply, although you should refer to databases and tables using the same
letter case throughout a given query. In any case, for a variety of reasons,
the recommended best practice is always to use the same letter case that
was used when the database was created.)
Creating a database doesn't select it for use; you must do that explicitly. To
make the zoo database the current database, use a statement like the one
below:
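Assuming the hypothetical database name zoo used above, the statement
would be:
USE zoo;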
Your database needs to be created only once, but you must select it for use
each time you start a MySQL session. You can do this by issuing a USE
statement as shown in the example. Alternatively, you can select the
database on the command line when you invoke MySQL; simply specify its
name after any connection parameters that you might need to provide.
Removing a database with SQL
With SQL Server Management Studio, you can right-click on the database
and select "Delete."
In the Delete Object window, select the option "Delete backup and restore
history information for databases" if you want to remove this information.
If you need to kick out open connections to your database, select "Close
existing connections." It will be impossible to remove the database if you
don't choose that option while there are still open connections to your
database; you will get an error that the database is still in use and can't be
deleted.
When you hit the OK button, the database will be removed from the SQL
instance, and the database files at the operating-system level will also be
removed. It is definitely not necessary to shut down the whole instance to
remove a database.
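The same removal can also be scripted in plain T-SQL; a hedged sketch,
assuming a database called sales_archive:
ALTER DATABASE sales_archive SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE sales_archive;
The ALTER DATABASE line plays the role of the "Close existing connections"
checkbox by rolling back any open transactions before the drop.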
After the removal, you still have some additional cleanup to do that people
regularly overlook.
Delete the jobs
Delete the jobs that were related to the database. If you don't remove them,
those jobs will fail, and you will get pointless alerts.
Delete the backup files
If you don't need the backup files any longer, simply remove them.
However, I would recommend keeping the last full backup of the database
and archiving it for at least a year or two. You never know when someone
will need the information later on.
Delete the logins without a DB user
Your database likely had some database users configured that were linked
to a server login.
If that server login isn't used for any other database user and isn't a
member of any server role other than public, I would recommend removing
that login, for security reasons as well as to keep your server clean.
Schema Creation with SQL
A user can create any number of schemas. The schema that has been created
belongs to the current user; however, it can be assigned to another user or
role with the ALTER SCHEMA statement.
The data volume of the objects within a schema can be restricted using
quotas, as described in the schema quota section.
When you create a new schema, you implicitly open this new schema. This
means the new schema is set as the CURRENT_SCHEMA, and any further
objects you create are placed within this new schema.
If you have specified the IF NOT EXISTS option, then no error message is
thrown if a schema with the same name already exists. The specified schema
is also opened regardless of whether it already existed.
The USING option in a virtual schema specifies the connector UDF script,
which then defines the contents of the virtual schema. Using the WITH
clause, you can specify certain properties that will be used by the connector
script.
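A minimal hedged sketch of creating a schema (the name retail is invented):
CREATE SCHEMA IF NOT EXISTS retail;
-- the new schema becomes the CURRENT_SCHEMA, so objects created next land inside it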


Inserting Data Into a Table with SQL
When we have just a couple of rows of data, often the most straightforward
path is to add them manually. We can do this by using the INSERT
statement:
Simply put in a couple of more seconds auditing the syntax:
INSERT INTO is the SQL watchword.
test_results is the name of the table that we need to place the
information into.
VALUES is another SQL catchphrase.
Then the real information lines are coming individually –
every one of them among brackets and isolated with
commas.
The field esteems are isolated with commas.
Watch out for the Content and Date information types in light
of the fact that these need to go between punctuations!
And remember the semicolon toward the finish of the entire
articulation!
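A minimal sketch of such a statement follows (the test_results table and its three columns, name, test_date, and score, are assumptions for illustration):

-- test_results and its columns are hypothetical
INSERT INTO test_results
VALUES
('Walt', '2020-03-04', 4.5),
('Jesse', '2020-03-04', 3.8);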
5.8 Populating a Table with New Data with SQL
As you are probably aware, the tables in our relational database represent entities. For example, each row in our Customer table holds the data for one specific customer; a row in ORDER_HEADER represents a distinct order, and so on. Ordinarily, the appearance of a new "real-world" entity calls for inserting a new row. For instance, you would need a new row in the Customer table if Top, Inc. acquired a new customer; you would need to insert a row into the ORDER_HEADER table when a customer places an order; a new row must be added to the Product table if Top begins selling a new product, and so on.
Inserting Data into Specific Columns with SQL
The situation where you need to insert a row with NULL values for certain columns is not unusual. As you probably know, NULL is used when a value is unknown or not applicable. For example, suppose Top begins selling a new lumber product, 30 × 40 × 50, but some of the attributes of this item are still unknown, including the weight and the price. Because of this, we would record it in the Product table using an INSERT statement that names only the columns for which we have values.
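A minimal sketch follows (the Product table and its column names are assumptions for illustration; the columns that are not listed, such as weight and price, are left to default to NULL):

-- Product and its columns are hypothetical
INSERT INTO Product (product_name, category)
VALUES ('Lumber 30 x 40 x 50', 'Timber');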
Inserting Null Values with SQL
The SQL INSERT statement can also be used to place NULL into a column explicitly.
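For example, a hedged sketch using the same assumed Product table:

-- weight and price are set to NULL explicitly
INSERT INTO Product (product_name, category, weight, price)
VALUES ('Lumber 30 x 40 x 50', 'Timber', NULL, NULL);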
Using Order By with SQL
The SQL ORDER BY clause is used to sort the data in ascending or descending order, based on one or more columns. Some databases sort query results in ascending order by default.
You can use more than one column in the ORDER BY clause. Make sure that whatever column you are using to sort is included in the column list.
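A minimal sketch, continuing with the assumed test_results table from earlier:

-- sort by score, highest first, then by name alphabetically
SELECT name, score
FROM test_results
ORDER BY score DESC, name ASC;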
The Where Clause with SQL
There are, however, times when we need to restrict the query results to a specified condition. The SQL WHERE clause comes in handy in such situations.
WHERE clause syntax
The basic syntax for the WHERE clause when used in a SELECT statement is as follows.
SELECT * FROM tableName WHERE condition;
HERE
•"SELECT * FROM tableName" is the standard SELECT statement
•"WHERE" is the keyword that restricts our SELECT query result set, and "condition" is the filter to be applied to the results. The filter could be a range, a single value, or a sub-query.
Let us now look at a practical example.
Suppose we want to get a member's personal details from the members table given membership number 1; we would use the following statement to accomplish that.
SELECT * FROM members WHERE membership_number = 1;
5.13 DDL in SQL
DDL, or Data Definition Language, consists of the SQL commands that can be used to define the database schema. It deals only with descriptions of the database schema and is used to create and modify the structure of database objects in the database.
Examples of DDL commands:
CREATE – used to create the database or its objects (such as tables, indexes, functions, views, stored procedures, and triggers).
DROP – used to delete objects from the database.
ALTER – used to alter the structure of the database.
5.14 Applying DDL Statements with SQL


SQL's Data Definition Language (DDL) deals with the structure of a database. It is distinct from the Data Manipulation Language, which deals with the data contained inside that structure. The DDL comprises these three statements:
CREATE: You use the various forms of this statement to build the basic structures of the database.
ALTER: You use this statement to change the structures that you have created.
DROP: You apply this statement to structures made with the CREATE statement, to destroy them.
CREATE
You can apply the SQL CREATE statement to a large number of SQL objects, including schemas, domains, tables, and views. By using the CREATE SCHEMA statement, you can not only create a schema but also identify its owner and specify a default character set. Here's an example of such a statement:
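The following is a hedged sketch of such a statement (the schema name, owner, and character set are assumptions for illustration, and support for the AUTHORIZATION and DEFAULT CHARACTER SET clauses varies between database products):

-- retail, sales_mgr, and utf8 are hypothetical names
CREATE SCHEMA retail
    AUTHORIZATION sales_mgr
    DEFAULT CHARACTER SET utf8;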
Use the CREATE DOMAIN statement to apply constraints to column values. The constraints you apply to a domain determine what objects the domain can and cannot contain. You can create domains after you establish a schema.
You create tables by using the CREATE TABLE statement, and you create views by using the CREATE VIEW statement. When you use the CREATE TABLE statement, you can specify constraints on the new table's columns at the same time.
You also have the CREATE CHARACTER SET, CREATE COLLATION, and CREATE TRANSLATION statements, which give you the flexibility of creating new character sets, collation sequences, or translation tables. (Collation sequences define the order in which you carry out comparisons or sorts. Translation tables control the conversion of character strings from one character set to another.)
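To round out the three DDL statements, here is a hedged sketch of ALTER and DROP in action (the table and column names are assumptions for illustration, and some products, such as SQL Server, omit the word COLUMN after ADD):

-- Product and Weight are hypothetical names
ALTER TABLE Product ADD COLUMN Weight DECIMAL(8,2);
DROP TABLE Product;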
Chapter 12 Tips and Tricks of SQL
SQL stands for Structured Query Language. It is a domain-specific language that you are going to use when you are programming against, or trying to manage the data inside, an RDBMS (relational database management system).
SQL has its roots in mathematics, specifically tuple relational calculus and relational algebra. It contains data definition and data manipulation features along with a data control language, and it involves the use of statements such as DELETE, UPDATE, INSERT, and queries.
In essence, you are going to be able to update, delete, insert, and search for the things that you put into the database. SQL is very commonly described as a declarative language; however, it also allows for procedural elements.
This is one of the first languages that used the relational model created by Edgar F. Codd. Although it does not follow all of the rules set forth for that model, it is one of the most widely used database languages.
In 1986, SQL became an ANSI standard, and in 1987 it became an ISO standard as well. There have been updates since then that allow the language to cover larger feature sets. Just keep in mind that SQL code is not going to be one hundred percent portable between databases unless there are some adjustments to the code so that it fits the requirements of each database.
Learning SQL can be one of the better decisions you make for your career, because it lets you push yourself forward and rely on your own knowledge rather than having to go to someone else for theirs. In fact, people will come to you to learn what it is that you know about it.
By learning SQL, you are going to be able to do more than you may have been able to before. Here are a few things that give you a good reason to learn SQL.
Money
Learning SQL gives you the opportunity to earn some extra money. Developers who work with SQL earn around $92,000 a year, and an administrator of an SQL database makes about $97,000 a year. So, just by learning SQL you put yourself in a position to earn around twice as much as the average American household makes in a year.
Highly sought after
Employers want people who know SQL! The more knowledge you have about SQL, the more sought after you are going to be by employers. Knowing SQL benefits not only you but your employer as well, because they will not have to pay for you to learn it. The interviewing process will go more smoothly than others you have gone through, and you may find that employers are willing to offer you more money simply for knowing SQL over another candidate. With SQL knowledge, you open yourself up to more careers than you might have been able to apply for before.
Four Tips That Make Using SQL Easier!
1. Changing the language of the user interface: close the program if you have it open and then go to the installation folder. Right-click on the shortcut that is on your desktop and open the file location. From there, open the SQL Developer folder, and then the first folder that is listed will need to be opened next. The next thing you are going to open is the sqldeveloper.conf file. You are going to add a new setting to the text that is already there to change the language to whatever it is that you want to see; you can put this new setting anywhere. Putting a comment in the file is a good idea so that you know what you have done if you have to get back into it at a later date. You will use AddVMOption before adding in the Duser.language setting, and you can set it to any language that you want (see the sketch after this list). Now reopen SQL Developer and it will be in the language that you want it in.
2. Constructing database connections: right-click on the connection panel on the left of the screen and click on New Connection. You will need to title the connection whatever it is that you want, and enter the username and password for it. You should change the color if you are going to be working with multiple connections at once. In the Role field, change the role if you are using a system connection. You can leave the hostname alone if you are using your home computer; however, if you are connecting to a different location, you will need to input the IP address of the machine the system is running on. Leave your port alone, and xe should be left alone as well unless you are not working with an Express edition. You can test the connection, and if it is successful, you can close the window; you have created your connection. If everything is correct, it will open with no errors and you will be able to put in SQL code.
3. Disabling features: there are a lot of features that SQL Developer offers, and if you do not use them, you should disable them so that they are not slowing down the developer. Go to the Tools menu and down to the Features option. Each feature has different folders; it is up to you to decide which features you want to keep running and which ones you want to disable. You can expand each folder so that you are able to see what it contains. All you have to do is uncheck a feature and it will turn that feature off, causing the system to run faster. Be sure to apply the changes so that they do not turn themselves back on without you turning them on yourself.
4. Executing commands and scripts: use the toolbar that is at the top of the developer and press the play button. Make sure that you have added in your semicolon. You can also use Ctrl and Enter so that you do not have to pull your hand off the keyboard. To run a script, you can use the toolbar again and select Run Script so that you run all of the commands, or press the F5 key if that is easier for you. Should your file be external, use the at sign and the file path to import it and run it (see the sketch after this list).
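A hedged sketch of the two snippets referenced in tips 1 and 4 (the language code and the script path are assumptions for illustration):

AddVMOption -Duser.language=en

@C:\scripts\my_script.sql

The first line goes into the configuration file described in tip 1; the second line is typed into the worksheet to run an external script file, as described in tip 4.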
Chapter 13 Database Components
Now that you know more about a database’s use in the real world and why
you may want to learn SQL, we will dive into the components of the
database.
These components or items within a database are typically referred to as
“objects”. These objects can range from tables, to columns, to indexes and
even triggers. Essentially, these are all of the pieces of the data puzzle that
make up the database itself.
Database Tables
Within the database are tables, which hold the data itself. A table consists of
columns that are the headers of the table, like First_Name and Last_Name,
for example. There are also rows, which are considered an entry within the
table. The point to where the column and row intersect is called a cell.
The cell is where the data is shown, like someone’s actual first name and
last name. Some cells may not always have data, which is considered
NULL in the database. This just means that no data exists in that cell.
In Microsoft’s SQL Server, a table can have up to 1,024 columns, but can
have any number of rows.
Schemas
A schema is considered a logical container for the tables. It’s essentially, a
way to group tables together based on the type of data that they hold. It
won’t affect an end user who interacts with the database, like someone who
runs reports. But one who works directly with the database, like a database
administrator or a developer, will see the available schemas.
Consider a realistic example of several tables containing data for Sales
records. There may be several tables named Sales_Order, Sales_History or
Sales_Customer. You can put all of these Sales tables into a “Sales” schema
to better identify them when you work directly with the database.
Columns
A column is a header within a table that is defined by a data type. The data type specifies the type(s) of data that can be held in each cell of that column, i.e. where the row and column meet.
Remember that you can only have up to 1,024 columns in a given table in
SQL Server!
Rows and NULL values
A row is considered an entry in a table. The row will be one line across, and
typically have data within each column. Though, in some cases, there may
be a NULL value in one or many cells.
Back to our example of names, most people have first and last names, but
not everyone has a middle name. In that case, a row would have values in
the first and last name columns, but not the middle name column, like
shown below.

Primary Keys
A primary key is a constraint on a column that forces every value in that
column to be unique. By forcing uniqueness on values in that column, it
helps maintain the integrity of the data and helps prevent any future data
issues.
A realistic example of a primary key would be an employee ID or a sales record ID. You wouldn't want to have two of the same employee IDs for two different people, nor would you want to have two or more of the same sales record IDs for different sales transactions. That would be a nightmare when trying to store and retrieve data!
You can see in the below example that each value for BusinessEntityID is
unique for every person.
Foreign Keys
Another key similar to the primary key is a foreign key. These differ from
primary keys by not always being unique and act as a link between two or
more tables.
Below is an example of a foreign key that exists in the
AdventureWorks2012 database. The foreign key is ProductID in this table
(Sales.SalesOrderDetail):

The ProductID in the above table is linking to the ProductID (primary key)
in the Production.Product table:

Essentially, a foreign key will check its link to the other table to see if that value exists. If not, then you will end up receiving an error when trying to insert data into the table where the foreign key is.
Constraints
Primary keys and foreign keys are known as constraints in the database.
Constraints are “rules” that are set in place as far as the types of data that
can be entered. There are several others that are used aside from primary
keys and foreign keys that help maintain the integrity of the data.
UNIQUE – enforces all values in that column to be different. An example
of this could be applied to the Production.Product table. Each product
should be different, since you wouldn’t want to store the same product
name multiple times.

NOT NULL – ensures that no value in that column is NULL. This could
also be applied to the same table as above. In this case, the ProductNumber
cannot have a NULL value, as each Product should have its own
corresponding ProductNumber.
DEFAULT – sets a default value in a column when a value is not provided.
A great example of this would be the ListPrice column. When a value isn’t
specified when being added to this table, the value will default to 0.00. If
this value were to be calculated in another table and be a NULL value (like
a sales table where sales from the company are made), then it would be
impossible to calculate based on a NULL value since it’s not a number.
Using a default value of 0.00 is a better approach.
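To make these constraints concrete, here is a hedged sketch of a table definition that uses them (the table and column names are assumptions for illustration, not objects from the AdventureWorks database, and the referenced Category_Demo table is assumed to exist already):

-- Product_Demo and Category_Demo are hypothetical tables
CREATE TABLE Product_Demo (
    ProductID INT PRIMARY KEY,
    ProductNumber VARCHAR(25) NOT NULL,
    ProductName VARCHAR(50) UNIQUE,
    ListPrice DECIMAL(10,2) DEFAULT 0.00,
    CategoryID INT FOREIGN KEY REFERENCES Category_Demo(CategoryID)
);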

INDEXES – Indexes are constraints that are created on a column and that speed up the retrieval of data. An index will essentially compile all of the values in a column and treat them as unique values, even if they're not. By treating them as unique values, it allows the database engine to improve its search based on that column.
Indexes are best used on columns that:
1. Do not have a unique constraint
2. Are not a primary key
3. Or are not a foreign key
The reason for not applying an index to a column that satisfies any of the
above three conditions, is that these are naturally faster for retrieving data
since they are constraints.
As an example, an index would be best used on something like a date
column in a sales table. You may be filtering certain transaction dates from
January through March as part of your quarterly reports, yet see many
purchases on the same day between those months. By treating it as a unique
column, even the same or similar values can still be found much quicker by
the database engine.
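A hedged sketch of creating such an index on a date column follows (the table, column, and index names are assumptions for illustration):

-- Sales_Order and OrderDate are hypothetical names
CREATE INDEX IX_SalesOrder_OrderDate
ON Sales_Order (OrderDate);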
Views
A view is a virtual table that is composed of one or more columns from one or more tables. It is created using a SQL query, and the original code used to create the view is recompiled when a user queries that view.
In addition, any updates to data made in the originating tables (i.e. the
tables and columns that make up the view) will be pulled into the view to
show current data. This is another reason that views are great for reporting
purposes, as you can pull real-time data without touching the origin tables.
For best practices, DO NOT update any data in a view. If you need to
update the data for any reason, perform that in the originating table(s).
To expand a little bit on why a view would be used, consider the following:
1. To hide the raw elements of the database from the end-user so that they only see what they need to. You can also make it more cryptic for the end-user.
2. As an alternative to queries that are frequently run in the database, such as queries for reporting purposes.
These are only a few reasons as to why you would use a view. However,
depending on your situation, there could be other reasons why you would
use a view instead of writing a query to directly obtain data from one or
more tables.
To better illustrate the concept of a view, the below example has two tables:
‘People’ and ‘Locations’. These two tables are combined into a view that is
called ‘People and Locations’ just for simplicity. These are also joined on a
common field, i.e. the LocationID.
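A hedged sketch of how the 'People and Locations' view described above might be defined (the view name, column names, and join condition are assumptions for illustration):

-- PeopleAndLocations and the listed columns are hypothetical
CREATE VIEW PeopleAndLocations AS
SELECT p.FirstName, p.LastName, l.City, l.Country
FROM People AS p
JOIN Locations AS l
    ON p.LocationID = l.LocationID;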

Stored Procedures
Stored procedures are pre-compiled SQL syntax that can be used over and over again by executing the procedure's name in SQL Server. If there's a certain query that you're running frequently, and you're writing it from scratch or saving it to a file and then opening that file every time you need to run it, it may be time to consider creating a stored procedure out of that query.
Just as you can pass a value into the WHERE clause of SQL syntax that you write from scratch, you can do the same with a stored procedure: you have the ability to pass in certain values to achieve the end result that you're looking for. That said, you don't always have to pass a parameter into a stored procedure.
As an example, let’s say that as part of the HR department, you must run a
query once a month to verify which employees are salary and non-salary, in
compliance with labor laws and company policy.
Instead of opening a file frequently or writing the code from scratch, you
can simply call the stored procedure that you saved in the database, to
retrieve the information for you. You would just specify the proper value
(where 1 is TRUE and 0 is FALSE in this case).
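As a hedged sketch, the stored procedure being called below might be defined roughly like this (the procedure body, the HumanResources.Employee table, and its columns are assumptions based on the example, not the actual definition from the book's database):

-- a hypothetical definition of the procedure executed below
CREATE PROCEDURE HumanResources.SalariedEmployees
    @SalariedFlag BIT
AS
BEGIN
    SELECT BusinessEntityID, JobTitle, SalariedFlag
    FROM HumanResources.Employee
    WHERE SalariedFlag = @SalariedFlag;
END;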
EXEC HumanResources.SalariedEmployees @SalariedFlag = 1
In the result set below, you can see some of the employees who work in a
salary type position:

Triggers
A trigger in the database is a stored procedure (pre-compiled code) that will
execute when a certain event happens to a table. Generally, these triggers
will fire off when data is added, updated or deleted from a table.
Below is an example of a trigger that prints a message when a new
department is created in the HumanResources.Department table.
--Creates a notification stating that a new department has been created
--when an INSERT statement is executed against the Department table
CREATE TRIGGER NewDepartment
ON HumanResources.Department
AFTER INSERT
AS RAISERROR ('A new department has been created.', 10, 9)
To expand on this a little more, you specify the name of your trigger after
CREATE TRIGGER. After ON, you’ll specify the table name that this is
associated with.
Next, you can specify which type of action will fire this trigger (you may
also use UPDATE and/or DELETE), which is known as a DML trigger in
this case.
Last, I’m printing a message that a new department has been created and
using some number codes in SQL Server for configuration.
To see this trigger in the works, here’s the INSERT statement I’m using to
create a new department. There are four columns in this table,
DepartmentID, Name, GroupName and ModifiedDate. I’m skipping the
DepartmentID column in the INSERT statement because a new ID is
automatically generated by the database engine.
--Adding a new department to the Department's table
INSERT INTO HumanResources.Department
(Name, GroupName, ModifiedDate)
VALUES
('Business Analysis', 'Research and Development', GETDATE())
-- GETDATE() gets the current date and time, depending on the data type
-- being used in the table
The trigger will prompt a message after the new record has been
successfully inserted.
A new department has been created.
(1 row(s) affected)
If I were to run a query against this table, I can see that my department was
successfully added as well.
Chapter 14 Working With Subqueries
In SQL programming, a subquery is also referred to as a nested query or an
inner query. Subqueries are defined as queries within other SQL queries.
They are customarily embedded inside the WHERE clause.
The purpose of a subquery is to return data that will be used in the main query as a condition for further restricting the data being retrieved. Subqueries can be used in coordination with SELECT, INSERT, UPDATE, and DELETE statements. They can also be used along with operators such as IN, =, <=, >=, and BETWEEN.
There are rules that subqueries must follow. They include:
All subqueries should be enclosed inside parentheses.
Each subquery can only contain one column within the SELECT clause, unless multiple columns exist in the main query for the subquery to compare its selected columns against.
Subqueries that return multiple rows can only be used together with multiple-value operators, such as the IN operator.
The SELECT list must not include any references evaluating to a CLOB, ARRAY, NCLOB, or BLOB.
A subquery cannot be immediately enclosed in a set function.
The BETWEEN operator cannot be used with a subquery. It is, however, possible for the BETWEEN operator to be used within a subquery.
Most frequently, subqueries are used together with the SELECT statement. There are also instances when subqueries are used along with INSERT statements. The INSERT statement makes use of the data returned from the subquery when inserting into other tables, and the data selected in the subquery can be modified using character, date, and number functions.
Subqueries are also used in coordination with UPDATE statements. One or many columns in a table can be updated by using a subquery together with the UPDATE statement. Subqueries are also applicable together with DELETE statements; in such a case, they are used to delete records from existing tables that are no longer needed. When the two are used together, they bring about changes to the existing table's columns and rows.
There is no single fixed syntax for subqueries. However, in most cases, subqueries are used with SELECT statements, as indicated below.
SELECT column_name
FROM table_name
WHERE column_name expression_operator
(SELECT column_name FROM table_name
WHERE …)
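A concrete hedged example of this pattern (the customers and orders tables and their columns are assumptions for illustration):

-- customers, orders, and their columns are hypothetical
SELECT customer_name
FROM customers
WHERE customer_id IN
    (SELECT customer_id FROM orders
     WHERE order_total > 100);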
The SQL Server Subquery
It is essential for programmers to understand SQL Server subqueries and how subqueries are used for querying data. SQL Server executes the whole query in stages. In the case of a customers table, it first comes up with the list of customer IDs, then substitutes the identification numbers returned by the subquery into the IN operator, and finally executes the outer query to get the final result set.
Using subqueries helps programmers join two or more steps together, because subqueries eliminate the need to select the customer identification numbers separately and plug them into the outer query. Additionally, the queries themselves adjust automatically anytime there are changes in the customer data.
Subqueries can also be nested in other subqueries. SQL programming
servers support over 30 levels of nesting. The SQL server subqueries are
used in place of expressions. When the subqueries return single values, they
are used anywhere expressions are used. SQL server subqueries are used
together with the IN operator. Subqueries used with this operator usually
return zero or more value sets. The outer queries make use of the values that
have been answered by the central subqueries.
SQL Server subqueries are also used with the ANY operator. Subqueries that are introduced using the ANY operator usually have the following syntax:
scalar_expression comparison_operator ANY (subquery)
When the subquery returns a list of values such as v1, v2, and v3, the ANY operator returns TRUE if the comparison holds for at least one of those values and FALSE when it does not. The ALL operator, on the other hand, returns TRUE only when the comparison holds for all of the returned values, and FALSE otherwise. The EXISTS operator returns TRUE when the subquery returns any results, and FALSE when it does not.
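A hedged sketch of ANY in use (the products table, its columns, and the comparison are assumptions for illustration):

-- products and its columns are hypothetical
SELECT product_name
FROM products
WHERE list_price > ANY
    (SELECT list_price FROM products
     WHERE category = 'Timber');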
Creating New Databases in SQL Programming
When creating a database in SQL programming, the initial queries are
responsible for the creation of new databases. One of the best examples is
the Facebook application. The application contains some databases for all
the following components.
Users – a database on Facebook that is used as storage for all of the information on a user's profile. It stores the details as the person uploads them to their account.
Interests – a database on Facebook that holds the various interests of the user. These interests are applied when tracking down the hobbies and talents of users.
Geographic Locations – a database that holds every city around the world where any user lives.
The second kind of query when creating a database is responsible for creating new tables within specific databases.
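A hedged sketch of these two kinds of queries (the database, table, and column names are assumptions for illustration):

-- social_app and users are hypothetical names
CREATE DATABASE social_app;

CREATE TABLE users (
    user_id INT PRIMARY KEY,
    full_name VARCHAR(100) NOT NULL,
    city VARCHAR(60)
);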
Industries That Use SQL Programming
SQL programming databases are commonly applied in technological fields
whereby large amounts of data are used. Some of the most common sectors
are the finance, music applications, and social media platforms industries.
In the finance industries, SQL is mainly used in payments processors, and
banking applications. They include Stripe, whereby they operate and store
data involving significant financial transactions as well as users. All these
processes are supported by complex databases. Banking database systems
usually require maximum security. This is, therefore, one of the reasons
why the SQL code applied has the highest levels and capabilities of risk
compliance.
Some of the music applications such as Pandora and Spotify also require
the use of intensive databases. These databases play significant roles
because they help the applications to store large libraries consisting of
music albums and files. The music stored in there is from different artists.
Therefore, these databases are used when finding what users are trying to
look for, storing the data about particular users as well as their preferences,
and interests.
Social media platforms are other industries that commonly use SQL
programming. This is because they require multiple data processing. Some
of the social media apps such as Snapchat, Facebook, and Instagram are
using SQL to enhance the storage of the profile information of their users.
Such information includes biography, location, and interests. SQL is also
applied in these applications to improve efficient updating of the
application’s database anytime a user comes up with some new posts, or
share some photos. It also allows for the recording of messages that are
generally sent from one user to the other. By this, it helps the users to
retrieve the messages posted and reread them in the future.
Common SQL Database Systems
The SQL database systems are typically ranked depending on the DB-
Engines popularity score. Below are some of the variables that are taken
into consideration during the rankings.
The number of times the system has been mentioned on websites, measured in terms of the results of queries on search engines.
The general interest in the system, based on how frequently it has been searched for according to Google Trends.
The frequency of technical discussions on the particular database system.
The number of job offers through which the database system has been
mentioned.
The number of profiles existing in professional networks whereby the
system has been mentioned.
The relevance of the database system in social networks.
1. Oracle Database
This is the most common SQL database system used all over the world
today. Numerous industries are using it in their operations. It is, however,
commonly used in the processing of online transactions and data
warehousing.
2. MYSQL Database
It is one of the open-source SQL database systems and is freely available to businesses and individuals. It is popularly used by small-scale businesses and startups because it does not have a license fee. It is also used in multiple applications and software programs that are open-source in nature.
3. Microsoft SQL Server
SQL Server is Microsoft's own relational database management system. It runs on all main versions of the Windows operating system. It is also used in consumer software and by web servers running on Windows, which means that Microsoft SQL Server has an extensive user base.
4. POSTGRESQL
It is also a free, open-source database system. It is commonly used in multiple industries due to its free license model.

Conclusion
As mentioned above, SQL is used in different sectors globally and applied in different areas to help with data management. One of those areas is the modification of index structures, which creates pathways that allow information of interest to be traced quickly. SQL is also applied as a technique to modify and change database tables; that is, it helps keep the stored data up to date, thereby eliminating instances of outdated data, which are often misleading.
