Chapter 4 - 2019
4.1 Introduction to Languages, IDEs, Tools and Technologies used for implementation.
Language: The project uses Python throughout, from image processing to storing the details. Python is designed
to be highly readable. It uses English keywords frequently, whereas other languages use
punctuation, and it has fewer syntactical constructions than other languages. Python is a multi-paradigm
programming language: object-oriented and structured programming are fully supported, and many of its
features support functional programming and aspect-oriented programming. Further paradigms are supported
via extensions, including design by contract and logic programming. Python features a comprehensive
standard library, and is often described as a "batteries included" language.
IDE: An IDE (short for integrated development environment) combines multiple development
utilities into one cohesive unit, reducing the configuration necessary to piece them together.
Reducing setup time can increase developer productivity, especially in cases where learning to use
the IDE is faster than manually integrating and learning all of the individual tools. Tighter
integration of all development tasks also has the potential to improve overall productivity beyond
just helping with setup. For example, code can be continuously parsed while it is being edited,
providing instant feedback when syntax errors are introduced and allowing developers to debug
code much faster and more easily. Python ships with its own IDE, IDLE, which has been bundled
with the default implementation of the language since version 1.5.2b1; it is packaged as an optional
part of Python in many Linux distributions, and it is written entirely in Python using the Tkinter
GUI toolkit (wrapper functions for Tcl/Tk). The project uses Spyder, which comes pre-installed
with the Anaconda distribution. Spyder is an open-source integrated development environment for
scientific programming in the Python language. Spyder integrates NumPy, SciPy, Matplotlib and
IPython, as well as other open-source software. Besides Spyder, IDLE or Notepad++ is also used
in some instances for quick execution and testing under different conditions.
Libraries: This project uses several Python libraries for different functions: OpenCV and
imutils for image processing, scikit-learn for machine learning tasks, and NumPy for mathematical computing.
OpenCV: OpenCV is a cross-platform library using which we can develop real-time computer
vision applications. It mainly focuses on image processing and on video capture and analysis, including
features like face detection and object detection. To capture an image, we use devices like cameras
and scanners. These devices record numerical values of the image (e.g. pixel values). OpenCV is a
library which processes digital images; therefore, we need to store these images for processing.
The Mat class of the OpenCV library is used to store the values of an image. It represents an n-dimensional
array and is used to store image data of grayscale or color images, voxel volumes, and similar structures.
This class comprises two parts: a header and a pointer to the data.
Header − Contains information like the size, the method used for storing, and the address of the matrix
(the header is constant in size).
Data pointer − Points to the matrix containing the actual pixel values (whose size varies with the image).
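Conceptually, an image is just such an n-dimensional array; in OpenCV's Python bindings, Mat objects appear as NumPy arrays. A minimal sketch (using NumPy only, with made-up pixel values) of how grayscale and color image data are laid out:

```python
import numpy as np

# A tiny 2x3 grayscale image: one 8-bit intensity value per pixel.
gray = np.array([[0, 128, 255],
                 [64, 192, 32]], dtype=np.uint8)

# A 2x3 color image: three channels per pixel (OpenCV orders them B, G, R).
color = np.zeros((2, 3, 3), dtype=np.uint8)
color[0, 0] = (255, 0, 0)   # a blue pixel in BGR ordering

print(gray.shape)    # (2, 3): rows x columns
print(color.shape)   # (2, 3, 3): rows x columns x channels
```

In the C++ API the header (size, type) and the data pointer are separate members of Mat; in Python the array's `shape` and `dtype` play the role of the header.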
Python's development is conducted largely through the Python Enhancement Proposal (PEP) process, the
primary mechanism for proposing major new features, collecting community input on issues, and
documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are
reviewed and commented on by the Python community and by Guido van Rossum, Python's Benevolent
Dictator for Life.
The mailing list python-dev is the primary forum for the language's development. Specific issues are
discussed in the Roundup bug tracker maintained at python.org. Development originally took place on
a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017.
CPython's public releases come in three types, distinguished by which part of the version number is
incremented:
Backward-incompatible versions, where code is expected to break and needs to be manually ported. The
first part of the version number is incremented. These releases happen infrequently; for example,
version 3.0 was released 8 years after 2.0.
Major or "feature" releases, about every 18 months, are largely compatible but introduce new features.
The second part of the version number is incremented. Each major version is supported by bugfixes for
several years after its release.
Bugfix releases, which introduce no new features, occur about every 3 months and are made when a
sufficient number of bugs have been fixed upstream since the last release. Security vulnerabilities are
also patched in these releases. The third and final part of the version number is incremented.
Many alpha, beta, and release-candidates are also released as previews and for testing before final releases.
Although there is a rough schedule for each release, releases are often delayed if the code is not ready. Python's
development team monitors the state of the code by running the large unit test suite during development.
The community of Python developers has also contributed over 86,000 software modules (as of
20 August 2016) to the Python Package Index (PyPI), the official repository of third-party Python libraries.
The major academic conference on Python is PyCon. There are also special Python mentoring programmes,
such as PyLadies. There are many different naming styles. It helps to be able to recognize which naming
style is being used, independently of what it is used for:
lowercase
lower_case_with_underscores
UPPERCASE
UPPER_CASE_WITH_UNDERSCORES
CapitalizedWords (or CapWords, or CamelCase -- so named because of the bumpy look of its letters).
Note: When using acronyms in CapWords, capitalize all the letters of the acronym. Thus,
HTTPServerError is better than HttpServerError.
Capitalized_Words_With_Underscores (ugly!)
There's also the style of using a short unique prefix to group related names together. This is not used much
in Python, but it is mentioned for completeness. For example, the os.stat() function returns a tuple whose
items traditionally have names like st_mode, st_size, st_mtime and so on. (This is done to emphasize the
correspondence with the fields of the POSIX system call struct, which helps programmers familiar with
that.)
The X11 library uses a leading X for all its public functions. In Python, this style is generally deemed
unnecessary because attribute and method names are prefixed with an object, and function names are
prefixed with a module name. In addition, the following special forms using leading or trailing underscores
are recognized (these can generally be combined with any case convention):
_single_leading_underscore: weak "internal use" indicator. E.g. from M import * does not import
objects whose names start with an underscore.
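These conventions can be sketched together in one short example (all names here are invented purely for illustration):

```python
# Illustrative naming styles recommended by PEP 8.
MAX_RETRIES = 5                      # UPPER_CASE_WITH_UNDERSCORES: a constant

def load_image(path):                # lower_case_with_underscores: functions, variables
    return None

class ImageProcessor:                # CapitalizedWords (CapWords): class names
    def _reset_cache(self):          # _single_leading_underscore: internal use
        self._cache = {}

class HTTPServerError(Exception):    # acronyms in CapWords stay fully capitalized
    pass
```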
PERT chart: A PERT chart is a graphical illustration of a project as a network diagram consisting of
numbered nodes (either circles or rectangles) representing events or milestones in the project, linked
by labelled vectors (directional lines) representing tasks in the project. The direction of the arrows
on the lines indicates the sequence of tasks. Tasks that must be completed in sequence but that don't
require resources or completion time are considered to have event dependency. These are
represented by dotted lines with arrows and are called dummy activities.
OpenProj: OpenProj is a free, open-source desktop alternative to Microsoft Project. The OpenProj
solution is ideal for desktop project management and is available on Linux, Unix, Mac, or Windows.
Among the views it provides are:
Gantt chart
PERT graph
Gantt Chart is a type of a bar chart that is used for illustrating project schedules. Gantt charts can
be used in any projects that involve effort, resources, milestones and deliveries. At present, Gantt
charts have become the popular choice of project managers in every field. Gantt charts allow project
managers to track the progress of the entire project. Through Gantt charts, the project manager can
keep a track of the individual tasks as well as of the overall project progression. In addition to
tracking the progression of the tasks, Gantt charts can also be used for tracking the utilization of
the resources in the project. These resources can be human resources as well as materials used.
Gantt chart was invented by a mechanical engineer named Henry Gantt in 1910. Since the
invention, Gantt chart has come a long way. By today, it takes different forms from simple paper-
based charts to sophisticated software packages. Gantt charts are used for project management
purposes. In order to use Gantt charts in a project, there are a few initial requirements to be fulfilled by
the project. First of all, the project should have a sufficiently detailed Work Breakdown Structure
(WBS). Secondly, the project should have identified its milestones and deliveries. In some
instances, project managers try to define the work breakdown structure while creating the Gantt chart.
This is one of the frequently practised errors in using Gantt charts: Gantt charts are not designed to
assist the WBS process; rather, they are for task progress tracking. Gantt charts can be
successfully used in projects of any scale, although for large projects tracking the many tasks
becomes complex, which is usually addressed with software packages.
Gantt chart
Flow Chart
Software testing is a fairly straightforward activity, in theory. For every input, there is a defined and known
output. We enter values, make selections, or navigate an application, then compare the actual result with
the expected one. If they match, we nod and move on. If they don’t, we possibly have a bug. Granted,
sometimes an output is not well-defined, there is some ambiguity, or you get disagreements about whether
a particular result represents a bug or something else. But in general, we already know what the output is
supposed to be. But there is a type of software where having a defined output is no longer the case: machine
learning systems. Most machine learning systems are based on neural networks, or sets of layered
algorithms whose variables can be adjusted via a learning process. The learning process involves using
known data inputs to create outputs that are then compared with known results. Some basic code testing
techniques used in this project are described below.
Code Coverage: Tests tell you when the code you’re testing doesn’t work the way you thought it
would, but they don’t tell you a thing about the code that you’re not testing. They don’t even tell
you that the code you’re not testing isn’t being tested. Code coverage is a technique, which can be
used to address that shortcoming. A code coverage tool watches while your tests are running, and
keeps track of which lines of code are (and aren’t) executed. After the tests have run, the tool will
give you a report describing how well your tests cover the whole body of code. It’s desirable to have
the coverage approach 100%, as you probably figured out already. Be careful not to focus on the
coverage number too intensely though, it can be a bit misleading. Even if your tests execute every
line of code in the program, they can easily not test everything that needs to be tested. That means
you can’t take 100% coverage as certain proof that your tests are complete. On the other hand, there
are times when some code really, truly doesn’t need to be covered by the tests—some debugging
support code, for example—and so less than 100% coverage can be completely acceptable. Code
coverage is a tool to give you insight into what your tests are doing, and what they may be missing.
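The idea behind a coverage tool can be sketched with Python's standard-library trace module, which records which lines run while a test executes. The function and the deliberately incomplete "test" below are invented for this example; a real project would use a dedicated tool such as coverage.py.

```python
import trace

def classify(n):
    if n < 0:
        return "negative"      # never executed by the test below, so left uncovered
    return "non-negative"

# Run one "test" under the tracer, the way a coverage tool watches a test suite.
tracer = trace.Trace(count=True, trace=False)
result = tracer.runfunc(classify, 5)
assert result == "non-negative"

# counts maps (filename, line_number) -> how many times that line executed;
# lines absent from it (like the "negative" branch) were never run.
executed = tracer.results().counts
print(len(executed) > 0)   # True
```

Even with every test passing, the report would show that the negative branch was never exercised, which is exactly the shortcoming code coverage is meant to expose.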
Version control hooks: Version control systems are programs for keeping track of changes to a
source code tree, even when those changes are made by different people. In a sense, they provide a
universal undo history and change log for the whole project, going all the way back to the moment
you started using the version control system. They also make it much easier to combine work done
by different people into a single, unified entity, and to keep track of different editions of the same
project. You can do all kinds of things by installing the right hook programs, but we’ll only focus
on one use. We can make the version control program automatically run our tests, when we commit
a new version of the code to the version control repository. This is a fairly nifty trick, because it
makes it difficult for test-breaking bugs to get into the repository unnoticed. As with code
coverage, though, there's potential for trouble if it becomes a matter of policy rather than simply
being a tool to make your life easier. In most systems, you can write the hooks such that it’s
impossible to commit code that breaks tests. That may sound like a good idea at first, but it’s really
not. One reason for this is that one of the major purposes of a version control system is
communication between developers, and interfering with that tends to be unproductive in the long
run. Another reason is that it prevents anybody from committing partial solutions to problems,
which means that things tend to get dumped into the repository in big chunks. Big commits are a
problem because they make it hard to keep track of what changed, which adds to the confusion.
There are better ways to make sure you always have a working codebase socked away somewhere,
such as keeping a known-good branch separate from day-to-day development.
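As a sketch, a Git pre-commit hook that runs the tests could look like the following. This assumes the project's tests run under `python -m unittest`; adjust the command for your own test runner. The script is saved as `.git/hooks/pre-commit` and marked executable.

```shell
#!/bin/sh
# Git runs this script before every commit; a non-zero exit aborts the commit.
# Assumes the test suite is discoverable by unittest; change as needed.
if ! python -m unittest discover -q; then
    echo "Tests failed; commit aborted." >&2
    exit 1
fi
```

Because it is just a script, the hook is advisory by nature: a developer can bypass it with `git commit --no-verify`, which fits the point above that hooks work best as a convenience rather than a hard policy.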
One important thing to note is that the training data itself could contain inaccuracies. For example,
a recorded wind speed and direction could be off or ambiguous because of measurement error, and the
measured cooling of a filament likely has some error as well. Some test types are shown below:
Non-Functional Testing: Testing the attributes of a component or system that do not relate to
functionality, such as performance, usability, reliability, and portability.
Regression Testing: The most repetitive software testing occurs as regression testing, which has the
objective of verifying that previously tested modules continue to function predictably following code
modification, and of guaranteeing that no new bugs were introduced during the most recent cycle of
enhancements to the app under test. In large measure, this procedure is composed of generating test
input and monitoring output for anticipated results and failures. Current AI methods such as
classification and clustering algorithms rely on just this type of primarily repetitive data to train
models to forecast future outcomes accurately. First, a set of known inputs and verified outputs is
used to set up features and train the model. A portion of the dataset with known inputs and outputs
is reserved for testing the model. This set of known inputs is fed to the algorithm and the output
is checked against the verified outputs in order to calculate the accuracy of the model. If the accuracy
falls below an acceptable level, the model is retrained or its features are revisited.
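The holdout evaluation described above can be sketched in plain Python. The dataset and the 1-nearest-neighbour "model" here are invented for illustration; a real project would use a library such as scikit-learn.

```python
# Known inputs (a single numeric feature) paired with verified output labels.
data = [(0.1, "low"), (0.2, "low"), (0.3, "low"), (0.8, "high"),
        (0.9, "high"), (0.75, "high"), (0.15, "low"), (0.85, "high")]

train, test = data[:6], data[6:]   # reserve a portion of the dataset for testing

def predict(x):
    """1-nearest-neighbour: return the label of the closest training input."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Feed the held-out inputs to the model and compare with the verified outputs.
correct = sum(predict(x) == label for x, label in test)
accuracy = correct / len(test)
print(accuracy)   # 1.0 for this toy dataset
```

The accuracy computed on the held-out portion, not on the training portion, is what indicates how the model is likely to behave on future inputs.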
You need test scenarios. Three may well be sufficient, to represent the expected best case, average case,
and worst case.
You will not reach mathematical optimization. We are, after all, working with algorithms that
produce approximations, not exact results. Determine what levels of outcomes are acceptable for
each scenario.
Defects will be reflected in the inability of the model to achieve the goals of the application.