Attribute-Based Searchable Encryption for Hybrid Boolean Keyword Search over Outsourced Encrypted Data


Abstract

With cloud computing becoming increasingly popular, there has been a rapid increase in the
number of data owners who outsource their data to the cloud while allowing users to retrieve the
data. To preserve the privacy of data, data owners usually encrypt their data before outsourcing
them to the cloud, and cloud servers can search across the ciphertext domain on behalf of users
without learning any information about the data. However, existing work in the literature mostly
supports only single-user or single-keyword search, which cannot satisfy more expressive search
requirements. Thus, we propose a searchable encryption primitive with attribute-based
access control for hybrid Boolean keyword search over outsourced encrypted data. The primitive
offers several desirable features: (1) Data owners can set search permissions for outsourced encrypted
data according to an access control policy. (2) Multiple users, whose attributes satisfy the access
control policy, are allowed to perform a retrieval operation upon the encrypted data. (3)
Authorized users are able to perform more expressive searches, such as arbitrary Boolean
keyword expression searches. Additionally, the primitive is provably secure under our security
model, and we have implemented a prototype to demonstrate its practicality.
Motivation
The main motivation is to overcome the limitation that existing schemes do not support keyword
search over encrypted data. In such cases, data users must download, filter and process a large
amount of data in order to get relevant results, which obviously lacks practicality. The challenge
is therefore to develop a searchable encryption scheme that enables users to search over
outsourced encrypted data and retrieve only the relevant encrypted documents.
Scope
The scope of this project is limited to a local environment: the application runs on a localhost
server and uses a local database only.
Outline
 Working with searchable encryption on outsourced encrypted data.
 Additional security is provided by combining keyword search with attribute-based encryption.
 Only authorized users can obtain encrypted results from the cloud by searching with the
respective keywords.
 Reduces the computational cost by preventing the release of cloud data to unauthorized
users.
Problem Statement:
This application provides a searchable encryption primitive with attribute-based access control
for hybrid Boolean keyword search over outsourced encrypted data. It offers several desirable
features:
1. Data owners can set search permissions for outsourced encrypted data according to an access
control policy.
2. Multiple users, whose attributes satisfy the access control policy, are allowed to perform a
retrieval operation upon the encrypted data.
3. Authorized users are able to perform more expressive searches, such as arbitrary Boolean
keyword expression searches (a toy illustration of the query semantics follows). This primitive is secure under our security model.
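To make the Boolean query semantics of feature 3 concrete, here is a toy Java sketch. It evaluates a hard-coded expression, (diabetes AND insulin) OR fracture, over a plaintext keyword set; the keywords and the query are hypothetical placeholders, and the actual primitive evaluates such expressions over ciphertexts without revealing the keywords.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy illustration (ours, not the paper's construction) of the hybrid
// Boolean query semantics: (diabetes AND insulin) OR fracture, evaluated
// over a plaintext keyword set purely to show what an authorized user may ask.
public class BooleanQueryDemo {
    public static void main(String[] args) {
        Set<String> docKeywords =
                new HashSet<>(Arrays.asList("diabetes", "insulin"));
        boolean hit = (docKeywords.contains("diabetes")
                        && docKeywords.contains("insulin"))
                || docKeywords.contains("fracture");
        System.out.println("document matches query: " + hit); // prints true
    }
}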
Existing System
Most of the existing schemes do not support keyword search over encrypted data. In such cases,
data users must download, filter and process a large amount of data in order to get relevant
results, which obviously lacks practicality. The challenge is to develop a searchable encryption
scheme that enables users to search over outsourced encrypted data and retrieve only the
relevant encrypted documents.
Disadvantages:
 The existing system does not support search techniques over outsourced encrypted
data.
 Even unauthorized users can obtain encrypted results from the cloud.
 This issue increases the computational cost of providing cloud data.
Proposed System:
The proposed system allows data owners to control the search permission for their outsourced
encrypted data according to an access control policy. Any user whose attributes satisfy the access
control policy can perform a keyword search, which means that our primitive supports
multiuser search. In addition, every user with a set of attributes can generate a delegated key for
another user who has a more restricted set of attributes (a set-based sketch of this idea follows).
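The following minimal Java sketch models this access-control and delegation idea with plain attribute sets. This is our simplification for illustration only: a policy is modelled as a required attribute set, a user may search if their attributes contain it, and a delegated key may only cover a subset of the delegator's attributes. The real scheme uses ABE access structures rather than set containment, and the names below are hypothetical.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Set-based simplification of attribute-based search permission and
// key delegation (an assumption of this sketch, not the actual scheme).
public class AttributePolicyDemo {
    public static void main(String[] args) {
        Set<String> policy = new HashSet<>(Arrays.asList("doctor", "cardiology"));
        Set<String> alice = new HashSet<>(Arrays.asList("doctor", "cardiology", "senior"));
        System.out.println("alice may search: " + alice.containsAll(policy)); // true

        // Delegation: Alice derives a key for Bob over a subset of her attributes.
        Set<String> bobDelegated = new HashSet<>(Arrays.asList("doctor"));
        System.out.println("delegation valid: " + alice.containsAll(bobDelegated)); // true
        System.out.println("bob may search: " + bobDelegated.containsAll(policy));  // false
    }
}

Running it prints that Alice may search, the delegation is valid, and Bob, holding a more restricted attribute set, may not search.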
Advantages:
 Working with searchable encryption on outsourced encrypted data.
 Additional security is provided by combining keyword search with attribute-based encryption.
 Only authorized users can obtain encrypted results from the cloud by searching with the
respective keywords.
 Reduces the computational cost by preventing the release of cloud data to unauthorized
users.
SYSTEM CONFIGURATION:
Hardware requirements:
Processor : Any modern processor
RAM : Min 1 GB
Hard Disk : Min 100 GB
Software requirements:
Operating System : Windows family
Technology : Java (1.7/1.8)
Front-End Technologies : HTML, HTML5, JavaScript, CSS
Server : Tomcat 7/8
Database (Back-End) : MySQL 5.5
IDE : EditPlus
INTRODUCTION
Cloud computing [1] is a powerful technology which uses the Internet and remote servers to
maintain massive-scale data and perform complex computing. An important application of cloud
computing is in personal health record (PHR) systems where individuals can access, manage and
share their health information [2]. Each patient is completely in control of his PHR and can
freely share his health information with a wide range of users, such as staff from healthcare
providers, family members or friends. In order to minimize storage and operational costs, many
large organizations and individual users outsource their PHRs to the cloud. However, in some
way, it makes patients lose direct control of their PHRs. In addition, the semi-trusted cloud server
can freely view PHRs and, in some cases, PHRs may even be utilized for unauthorized
secondary use or commercial use. In order to ensure the confidentiality of sensitive PHRs, it is
necessary for patients (data owners) to encrypt their PHRs (data) before outsourcing them to the
cloud [3]. This, however, prevents users from searching outsourced encrypted data as normal
search algorithms cannot be executed in the encrypted domain. Searchable encryption (SE) is a
cryptographic technique that allows searching for specific information (e.g., a keyword) in an
encrypted document without learning anything about the plaintext data. The key steps are as
follows. First, a data owner encrypts a set of keywords, which are extracted from a document, into
a keyword ciphertext and uploads both the encrypted document and the keyword ciphertext to
the cloud. Then, when a data user needs to retrieve some document, he generates a keyword
token and sends the token to the cloud. Finally, the cloud uses a search algorithm to verify which
keyword ciphertext matches the keyword token and sends back the encrypted document with
matching keywords to the data user. Two main SE techniques are searchable symmetric
encryption (SSE) and public key encryption with keyword search (PEKS) [4]. In an SSE system,
only the secret key holder is allowed to generate keyword ciphertexts and keyword tokens.
However, in a PEKS system, any user can generate keyword ciphertexts under a data owner’s
public key, but only the private key owner can perform searches. Thus, SSE is more suitable for
a single user to write and read data, whereas PEKS is used in multiuser writing and single-user
reading application scenarios. In practice, most databases do not only serve a single user. For
example, in PHR systems, different clinicians may need to enter and access the health record of a
patient. Existing SE schemes use keyword token sharing techniques to solve the problem, such as
broadcast encryption [5], [6] and proxy re-encryption [7], but only allow a single user to write to
the database.
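The following toy Java sketch mirrors the three-step SE workflow described above: the owner builds a keyword index, the user issues a token, and the cloud matches. The keyed SHA-256 tags, file names and shared key are hypothetical placeholders, not the constructions of any real SSE or PEKS scheme; the sketch only makes the message flow concrete.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the owner/user/cloud message flow. The "keyword ciphertext"
// and "keyword token" are both a keyed hash here, chosen only to make the
// three steps concrete; real schemes use proper cryptographic constructions.
public class ToySearchableIndex {

    static byte[] tag(String key, String keyword) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return md.digest((key + ":" + keyword).getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        String key = "owner-user-shared-key";              // assumed shared secret
        // Step 1: the data owner extracts keywords and uploads the index.
        Map<String, List<byte[]>> cloudIndex = new HashMap<>();
        cloudIndex.put("report1.enc", Arrays.asList(tag(key, "diabetes"), tag(key, "insulin")));
        cloudIndex.put("report2.enc", Arrays.asList(tag(key, "fracture")));
        // Step 2: the data user generates a keyword token and sends it to the cloud.
        byte[] token = tag(key, "insulin");
        // Step 3: the cloud tests each keyword ciphertext against the token.
        for (Map.Entry<String, List<byte[]>> e : cloudIndex.entrySet()) {
            for (byte[] kw : e.getValue()) {
                if (Arrays.equals(kw, token)) {
                    System.out.println("match, return " + e.getKey());
                }
            }
        }
    }
}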
LITERATURE SURVEY
Authors: D. Quick, B. Martini, and K. R. Choo
Paper: Cloud Storage Forensics. Syngress Publishing / Elsevier, 2014. Available:
http://www.elsevier.com/books/cloud-storageforensics/quick/978-0-12-419970-5
 To reduce the risk of digital forensic evidence being called into question in judicial
proceedings, it is important to have a rigorous methodology and set of procedures for
conducting digital forensic investigations and examinations.
 Digital forensic investigation in the cloud computing environment, however, is in its infancy
due to the comparatively recent prevalence of cloud computing.
 Cloud Storage Forensics presents the first evidence-based cloud forensic framework.
Using three popular cloud storage services and one private cloud storage service as case
studies, the authors show you how their framework can be used to undertake research
into the data remnants on both cloud storage servers and client devices when a user
undertakes a variety of methods to store, upload, and access data in the cloud.
 By determining the data remnants on client devices, you gain a better understanding of
the types of terrestrial artifacts that are likely to remain at the Identification stage of an
investigation.
 Once it is determined that a cloud storage service account has potential evidence of
relevance to an investigation, you can communicate this to legal liaison points within
service providers to enable them to respond and secure evidence in a timely manner.
Key Features
 Learn to use the methodology and tools from the first evidence-based cloud forensic
framework
 Case studies provide detailed tools for analysis of cloud storage devices using popular
cloud storage services
 Includes coverage of the legal implications of cloud storage forensic investigations
 Discussion of the future evolution of cloud storage and its impact on digital forensics
Authors: K. R. Choo, M. Herman, M. Iorga, and B. Martini
Paper: “Cloud forensics: State-of-the-art and future directions,” Digital Investigation,
vol. 18, pp. 77–78, 2016.
 Cloud computing and digital forensics are emerging fields of technology.
 Unlike traditional digital forensics, where the target environment can be almost
completely acquired, isolated and kept under the investigator's control, in cloud
environments the distribution of computation and storage poses unique and complex
challenges to investigators.
 Recently, the term "cloud forensics" has had an increasing presence in the field of digital
forensics. In this state-of-the-art review, we included the most recent research efforts that
used "cloud forensics" as a keyword and then classified the literature into three
dimensions: (1) survey-based, (2) technology-based and (3) forensic-procedure-based.
 We discuss widely accepted international standard bodies and their efforts to cope with
the current trend of cloud forensics.
 Our aim is not only to reference related work based on the discussed dimensions, but also
to analyze them and generate a mind map that will help in identifying research gaps.
Finally, we summarize existing digital forensics tools and the available simulation
environments that can be used for evidence acquisition, examination and cloud forensics
test purposes.
Authors: D. Quick and K. R. Choo
Paper: “Google Drive: Forensic analysis of data remnants,” J. Network and Computer
Applications, vol. 40, pp. 179–193, 2014.
 File storage and synchronization cloud services receive a great response from internet
users; these services offer features through which users can store files in the cloud, sync
those files with a PC or mobile device, and share all kinds of files or personal data publicly
or with a specific person.
 Recently, cases have been reported where cyber criminals used and/or targeted such cloud
services to commit malicious activities such as identity theft, privacy violations, malware
distribution, sexual harassment, cyber terrorism, etc.
 The retrieval of evidence from cloud storage services such as Google Drive, Dropbox
and OneDrive has been identified as an emerging challenge for digital forensic
researchers and examiners.
 There is a need for sound digital forensic knowledge relating to the forensic analysis of
cloud storage services to identify potential digital evidence. Google Drive is a popular
cloud storage service, and in this research paper,
 I did a detailed study of the artifacts left behind by Google Drive: Registry changes during
the installation or uninstallation process; file system analysis during the login or logout
process and the uploading, downloading or deletion of files; and analysis of logs and
memory to identify artifacts of forensic interest.
 During the research, the hash of the extracted data/artifacts on the cloud is checked with
the original data/artifacts to establish the integrity.
 Timestamp information may be a crucial aspect of an investigation, and therefore it is
important to record the information available and to understand the circumstances
relating to a timestamp on a file.
Authors: L. Cheung and C. C. Newport
Paper: “Provably secure ciphertext policy ABE,” in Proceedings of the 2007 ACM Conference
on Computer and Communications Security, CCS 2007, Alexandria, Virginia, USA, October 28-
31, 2007. ACM, 2007, pp. 456–465.
 In ciphertext policy attribute-based encryption (CP-ABE), every secret key is associated
with a set of attributes, and every ciphertext is associated with an access structure on
attributes.
 Decryption is enabled if and only if the user’s attribute set satisfies the ciphertext access
structure.
 This provides fine-grained access control on shared data in many practical settings,
including secure databases and secure multicast.
 In this paper, we study CP-ABE schemes in which access structures are AND gates on
positive and negative attributes. Our basic scheme is proven to be chosen plaintext (CPA)
secure under the decisional bilinear Diffie-Hellman (DBDH) assumption.
 We then apply the Canetti-Halevi-Katz technique to obtain a chosen ciphertext (CCA)
secure extension using one-time signatures.
 The security proof is a reduction to the DBDH assumption and the strong existential
unforgeability of the signature primitive.
 In addition, we introduce hierarchical attributes to optimize our basic scheme reducing
both ciphertext size and encryption/decryption time while maintaining CPA security.
 Finally, we propose an extension in which access policies are arbitrary threshold trees,
and we conclude with a discussion of practical applications of CP-ABE
 In this paper we present several related CP-ABE schemes.
 The basic scheme allows an encryptor to use any AND gate on positive and negative
attributes as an access policy on the ciphertext.
 This scheme is proven to be CPA secure under the DBDH assumption. To obtain CCA
security, we extend the basic scheme with strongly existentially unforgeable one-time
signatures.
 We also present a variant with substantially smaller ciphertexts and faster
encryption/decryption operations.
 The main idea is to form a hierarchy of attributes, so that fewer group elements are
needed to represent all attributes in the system.
 This efficient variant is proven to be CPA secure. We believe our CCA secure scheme
can be optimized in a similar way
Authors: Y. Rouselakis and B. Waters
Paper: “Practical constructions and new proof methods for large universe attribute-based
encryption,” in 2013 ACM SIGSAC Conference on Computer and Communications Security,
CCS’13, Berlin, Germany, November 4-8, 2013. ACM, 2013, pp. 463–474.
 We propose two large universe Attribute-Based Encryption constructions.
 In a large universe ABE construction any string can be used as an attribute and attributes
need not be enumerated at system setup.
 Our first construction establishes a novel large universe Ciphertext-Policy ABE scheme
on prime order bilinear groups, while the second achieves a significant efficiency
improvement over the large universe Key-Policy ABE systems of Lewko-Waters and
Lewko.
 Both schemes are selectively secure in the standard model under two “q-type”
assumptions similar to ones used in prior works.
 Our work brings back “program and cancel” techniques to this problem.
 We provide implementations and benchmarks of our constructions in Charm, a
programming environment for rapid prototyping of cryptographic primitives.
 In this section we present our large universe KP-ABE scheme. We mention here that it
can be converted to an HIBE scheme using non-repeating identities, “AND” policies and
delegation capabilities (cf. [25]).
 The intuition behind the functionality of this construction is simpler than the CP-ABE. In
this setting the public parameters consist of the five terms (g, u, h, w, e(g, g)^α).
 There is one term less due to the fact that now the master secret key α is the secret to be
shared during all the key generation calls.
 As a result the “secret sharing layer” uses the g term only, and the w term is used to
“bind” this layer to the u, h “attribute layer”.
SYSTEM ANALYSIS

What is the Waterfall Model?


The Waterfall Model is a sequential model that divides software development into distinct phases.
Each phase is designed to perform a specific activity during the SDLC. It was introduced in
1970 by Winston Royce.
Requirements:
The first phase involves understanding what needs to be designed along with its function, purpose,
etc. Here, the specifications of the input and the output of the final product are studied and marked.
System Design:
The requirement specifications from the first phase are studied in this phase and the system design is
prepared. System design helps in specifying hardware and system requirements and also helps in
defining the overall system architecture. It serves as the blueprint for the software code written
in the next stage.
Implementation:
With inputs from the system design, the system is first developed in small programs called units,
which are integrated in the next phase. Each unit is developed and tested for its functionality,
which is referred to as unit testing.
Integration and Testing:
All the units developed in the implementation phase are integrated into a system after testing
each unit. The integrated software needs to go through constant testing to find any flaws or
errors. Testing is done so that the client does not face any problems during installation of the
software.
Deployment of System:
Once the functional and non-functional testing is done, the product is deployed in the customer
environment or released into the market.
Maintenance: This step occurs after installation, and involves making modifications to the
system or an individual component to alter attributes or improve performance. These
modifications arise either due to change requests initiated by the customer, or defects uncovered
during live use of the system. The client is provided with regular maintenance and support for
the developed software.
FUNCTIONAL REQUIREMENTS
 Home
 Data Owner
 Data consumer
 Cloud Server
 Central authority
 Data User
 Cloud Provider
 Storage Details
 Profile
 File Access
NON-FUNCTIONAL REQUIREMENTS
What is Non-Functional Requirement?
A NON-FUNCTIONAL REQUIREMENT (NFR) specifies a quality attribute of a software
system. NFRs judge the software system based on responsiveness, usability, security,
portability and other non-functional standards that are critical to the success of the software
system. An example of a non-functional requirement is "how fast does the website load?".
Failing to meet non-functional requirements can result in systems that fail to satisfy user needs.

Non-functional requirements allow you to impose constraints or restrictions on the design of
the system across the various agile backlogs. For example: the site should load within 3 seconds
when the number of simultaneous users is greater than 10,000. The description of non-functional
requirements is just as critical as that of functional requirements.

 Usability requirement
 Serviceability requirement
 Manageability requirement
 Recoverability requirement
 Security requirement
 Data Integrity requirement
 Capacity requirement
 Availability requirement
 Scalability requirement
 Interoperability requirement
 Reliability requirement
 Maintainability requirement
 Regulatory requirement
 Environmental requirement
Examples of Non-functional requirements
Here, are some examples of non-functional requirement:
1. Users must change the initially assigned login password immediately after the first
successful login. Moreover, the initial password should never be reused.
2. Employees are never allowed to update their salary information. Any such attempt should be
reported to the security administrator.
3. Every unsuccessful attempt by a user to access an item of data shall be recorded on an
audit trail.
4. A website should be capable of handling 20 million users without affecting its
performance.
5. The software should be portable, so moving from one OS to another does not create any
problems.
6. Privacy of information, the export of restricted technologies, intellectual property rights,
etc. should be audited.
Advantages of Non-Functional Requirement
Benefits/pros of non-functional requirements are:
 They ensure the software system follows legal and compliance rules.
 They ensure the reliability, availability, and performance of the software system.
 They ensure a good user experience and ease of operating the software.
 They help in formulating the security policy of the software system.
Disadvantages of Non-functional requirement
Cons/drawbacks of non-functional requirements are:
 Non-functional requirements may affect the various high-level software subsystems.
 They require special consideration during the software architecture/high-level design
phase, which increases costs.
 Their implementation does not usually map to a specific software sub-system.
 It is tough to modify non-functional requirements once you pass the architecture phase.
KEY LEARNING
 A non-functional requirement defines the performance attributes of a software system.
 Types of non-functional requirements include scalability, capacity, availability, reliability,
recoverability, data integrity, etc.
 An example of a non-functional requirement: employees are never allowed to update their
salary information, and any such attempt should be reported to the security administrator.
 A functional requirement is a verb, while a non-functional requirement is an attribute.
 The advantage of non-functional requirements is that they help you to ensure a good user
experience and ease of operating the software.
 The biggest disadvantage of non-functional requirements is that they may affect the various
high-level software subsystems.
IMPLEMENTATION
System Architecture:

Modules:
Data owner:
The data owner outsources his encrypted data to the cloud and controls who can search
his outsourced encrypted data.
Data User:
The authorized user whose attributes satisfy the access structure of the keyword ciphertext, and
who can thus retrieve the data owner’s outsourced data.
Cloud Server:
The cloud server provides storage and computation services, such as storing the encrypted
data and searching the encrypted data on behalf of authorized users.
Trust Authority:
In this module, the trust authority generates private keys for all users who are registered
in the system.
About Project Software’s
Java, Apache Tomcat Server, MySQL, EditPlus
In our web application development we use a one-tier architecture: the whole application is
developed on a single system with all three layers of application development. In the presentation
layer we use web technologies such as HTML, HTML5, CSS and JavaScript to build the GUI of
the application. The second layer holds the business logic, i.e., the implementation of the
application, where we use Java and J2EE, with JDBC to connect the business layer to the
database layer. Finally, in the database layer we develop the data structures of the application.
A minimal JDBC sketch follows Fig. 3.

Fig. 3: Single-tier architecture project development
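As a rough illustration of the JDBC hookup between the business layer and the database layer, consider the sketch below. The JDBC URL, credentials, table and column names are placeholders for whatever the local MySQL setup actually uses; the driver class shown is the MySQL 5.x-era one and must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal business-layer-to-database-layer JDBC sketch. URL, credentials
// and schema are illustrative placeholders, not the project's actual setup.
public class DbConnect {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");            // MySQL 5.x driver class
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/projectdb", "root", "root");
        PreparedStatement ps = con.prepareStatement(
                "SELECT file_name FROM encrypted_files WHERE owner = ?");
        ps.setString(1, "patient1");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString("file_name"));
        }
        rs.close();
        ps.close();
        con.close();
    }
}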


How we used Java in our project development
Installation and setup on our system
We downloaded the software from the Oracle website, installed it on our system and set the
system path of Java in our OS. The main logic of our algorithm is implemented with core Java
concepts only; for the web application we used JSP concepts, and to connect to the database we
used JDBC. With these concepts we built the application as a single-tier architecture project.
Data storage in MySQL
We took the open-source MySQL software from its provider's website and ran it on our system.
We used it to create the project's database tables as per the project requirements. For
user-friendly access to MySQL we used a tool called SQLyog, where all MySQL operations can
be performed by click and use. A small table-creation example follows.
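For instance, one of the project tables could be created programmatically as shown below; the same DDL can be typed into SQLyog by hand. The database name, credentials and schema here are illustrative guesses, not the project's actual tables.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Creates one example table via JDBC. Names and columns are hypothetical.
public class CreateTables {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/projectdb", "root", "root");
        Statement st = con.createStatement();
        st.executeUpdate("CREATE TABLE IF NOT EXISTS encrypted_files ("
                + " id INT AUTO_INCREMENT PRIMARY KEY,"
                + " owner VARCHAR(64) NOT NULL,"
                + " file_name VARCHAR(255) NOT NULL,"
                + " keyword_index BLOB)");
        System.out.println("table created");
        st.close();
        con.close();
    }
}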
About the role of the Apache Tomcat web server
As our project is a web application we need a web server, so we again used open-source
software. Our project source code resides in the webapps directory of the server, and from that
location the application runs in a web browser where users can see the implementation of the
whole project.

Application Development Structure


SAMPLE CODE
SYSTEM DESIGN
UML DIAGRAMS
The System Design Document describes the system requirements, operating environment,
system and subsystem architecture, files and database design, input formats, output layouts,
human-machine interfaces, detailed design, processing logic, and external interfaces.
Global Use Case Diagrams:
Identification of actors:
Actor: An actor represents the role a user plays with respect to the system. An actor interacts with,
but has no control over, the use cases.
Graphical representation: a stick figure icon labelled with the actor name (<<Actor name>>).

An actor is someone or something that:
 Interacts with or uses the system.
 Provides input to and receives information from the system.
 Is external to the system and has no control over the use cases.
Actors are discovered by examining:
 Who directly uses the system?
 Who is responsible for maintaining the system?
 External hardware used by the system.
 Other systems that need to interact with the system.
Questions to identify actors:
 Who is using the system? Or, who is affected by the system? Or, which
groups need help from the system to perform a task?
 Who affects the system? Or, which user groups are needed by the system
to perform its functions? These functions can be both main functions and
secondary functions such as administration.
 Which external hardware or systems (if any) use the system to perform
tasks?

 What problems does this application solve (that is, for whom)?

 And, finally, how do users use the system (use case)? What are they doing
with the system?
The actors identified in this system are:
a. System Administrator
b. Customer
c. Customer Care
Identification of usecases:
Usecase: A use case can be described as a specific way of using the system from a user’s
(actor’s) perspective.
Graphical representation: an ellipse labelled with the use case name.

A more detailed description might characterize a use case as:


 Pattern of behavior the system exhibits
 A sequence of related transactions performed by an actor and the system
 Delivering something of value to the actor
Use cases provide a means to:
 capture system requirements
 communicate with the end users and domain experts
 test the system
Use cases are best discovered by examining the actors and defining what the actor will be
able to do with the system.
Guidelines for identifying use cases:
 For each actor, find the tasks and functions that the actor should be able to perform or
that the system needs the actor to perform. The use case should represent a course of
events that leads to a clear goal.
 Name the use cases.
 Describe the use cases briefly by applying terms with which the user is familiar.
This makes the description less ambiguous
Questions to identify use cases:
 What are the tasks of each actor?
 Will any actor create, store, change, remove or read information in the system?
 What use case will store, change, remove or read this information?
 Will any actor need to inform the system about sudden external changes?
 Does any actor need to be informed about certain occurrences in the system?

 What usecases will support and maintain the system?


Flow of Events
A flow of events is a sequence of transactions (or events) performed by the system. It
typically contains very detailed information, written in terms of what the system should do, not
how the system accomplishes the task. Flows of events are created as separate files or documents
in your favorite text editor and then attached or linked to a use case using the Files tab of a model
element.
A flow of events should include:
 When and how the use case starts and ends
 Use case/actor interactions

 Data needed by the use case


 Normal sequence of events for the use case
 Alternate or exceptional flows
Construction of Usecase diagrams:
Use-case diagrams graphically depict system behavior (use cases). These diagrams present a
high level view of how the system is used as viewed from an outsider’s (actor’s) perspective. A
use-case diagram may depict all or some of the use cases of a system.
A use-case diagram can contain:
 actors ("things" outside the system)
 use cases (system boundaries identifying what the system should do)
 Interactions or relationships between actors and use cases in the system including the
associations, dependencies, and generalizations.
Relationships in use cases:
1. Communication:
The communication relationship of an actor in a usecase is shown by connecting the actor
symbol to the usecase symbol with a solid path. The actor is said to communicate with the
usecase.
2. Uses:
A Uses relationship between the usecases is shown by generalization arrow from the usecase.
3. Extends:
The extend relationship is used when we have one usecase that is similar to another usecase but
does a bit more. In essence, it is like a subclass.
SEQUENCE DIAGRAMS
A sequence diagram is a graphical view of a scenario that shows object interaction in a time-
based sequence: what happens first, what happens next. Sequence diagrams establish the roles of
objects and help provide essential information to determine class responsibilities and interfaces.
There are two main differences between sequence and collaboration diagrams: sequence
diagrams show time-based object interaction while collaboration diagrams show how objects
associate with each other. A sequence diagram has two dimensions: typically, vertical placement
represents time and horizontal placement represents different objects.
Object:
An object has state, behavior, and identity. The structure and behavior of similar objects are
defined in their common class. Each object in a diagram indicates some instance of a class. An
object that is not named is referred to as a class instance.
The object icon is similar to a class icon except that the name is underlined:
An object's concurrency is defined by the concurrency of its class.
Message:
A message is the communication carried between two objects that trigger an event. A message
carries information from the source focus of control to the destination focus of control. The
synchronization of a message can be modified through the message specification.
Synchronization means a message where the sending object pauses to wait for results.
Link:
A link should exist between two objects, including class utilities, only if there is a relationship
between their corresponding classes. The existence of a relationship between two classes
symbolizes a path of communication between instances of the classes: one object may send
messages to another. The link is depicted as a straight line between objects or objects and class
instances in a collaboration diagram. If an object links to itself, use the loop version of the icon.
CLASS DIAGRAM:
Identification of analysis classes:
A class is a set of objects that share a common structure and common behavior (the same
attributes, operations, relationships and semantics). A class is an abstraction of real-world items.
There are 4 approaches for identifying classes:
a. Noun phrase approach:
b. Common class pattern approach.
c. Use case Driven Sequence or Collaboration approach.
d. Classes, Responsibilities and Collaborators approach
1. Noun Phrase Approach:
The guidelines for identifying the classes:
 Look for nouns and noun phrases in the usecases.
 Some classes are implicit or taken from general knowledge.
 All classes must make sense in the application domain; avoid computer
implementation classes and defer them to the design stage.
 Carefully choose and define the class names.
After identifying the classes we have to eliminate the following types of classes:
 Adjective classes.
2. Common class pattern approach:
The following are the patterns for finding the candidate classes:
 Concept class.
 Events class.
 Organization class.
 People class.
 Places class.
 Tangible things and devices class.

3. Use case driven approach:
We have to draw the sequence diagram or collaboration diagram. If there is a need for
some classes to represent some functionality, then add new classes which perform those
functionalities.
4. CRC approach:
The process consists of the following steps:
 Identify classes’ responsibilities (and identify the classes)
 Assign the responsibilities
 Identify the collaborators.
Identification of responsibilities of each class:
The questions that should be answered to identify the attributes and methods of a class
respectively are:
a. What information about an object should we keep track of?
b. What services must a class provide?
Identification of relationships among the classes:
Three types of relationships among the objects are:
Association: How are objects associated?
Super-sub structure: How are objects organized into super classes and sub classes?
Aggregation: What is the composition of the complex classes?
Association:
The questions that will help us to identify the associations are:
a. Is the class capable of fulfilling the required task by itself?
b. If not, what does it need?
c. From what other classes can it acquire what it needs?
Guidelines for identifying the tentative associations:
 A dependency between two or more classes may be an association. Association often
corresponds to a verb or prepositional phrase.
 A reference from one class to another is an association. Some associations are implicit or
taken from general knowledge.
Some common association patterns are:
Location association: part of, next to, contained in, etc.
Communication association: talk to, order to, etc.
We have to eliminate unnecessary associations such as implementation associations, ternary or
n-ary associations and derived associations.
Super-sub class relationships:
Super-sub class hierarchy is a relationship between classes where one class is the parent class of
another (derived) class. This is based on inheritance.
Guidelines for identifying the super-sub relationship (a generalization) are:
1. Top-down:
Look for noun phrases composed of various adjectives in a class name. Avoid excessive
refinement. Specialize only when the sub classes have significant behavior.
2. Bottom-up:
Look for classes with similar attributes or methods. Group them by moving the common
attributes and methods to an abstract class. You may have to alter the definitions a bit.
3. Reusability:
Move the attributes and methods as high as possible in the hierarchy.
4. Multiple inheritances:
Avoid excessive use of multiple inheritances. One way of getting benefits of multiple
inheritances is to inherit from the most appropriate class and add an object of another class as an
attribute.
Aggregation or a-part-of relationship:
It represents the situation where a class consists of several component classes. A class
that is composed of other classes does not behave like its parts; it behaves very differently. The
major properties of this relationship are transitivity and antisymmetry.
The questions whose answers will determine the distinction between the part and whole
relationships are:
 Does the part class belong to the problem domain?
 Is the part class within the system’s responsibilities?

 Does the part class capture more than a single value? (If not, then simply include it
as an attribute of the whole class.)

 Does it provide a useful abstraction in dealing with the problem domain?


There are three types of aggregation relationships. They are:
Assembly:
It is constructed from its parts, and an assembly-part situation physically exists.
Container:
A physical whole encompasses but is not constructed from physical parts.
Collection member:
A conceptual whole encompasses parts that may be physical or conceptual. The container and
collection are represented by hollow diamonds, but composition is represented by a solid diamond.
DESIGN AND ANALYSIS:

This system was designed as a Java web application with a MySQL database interface. With the
help of HTML and CSS we design the presentation logic, such as registration forms, login forms,
report uploads and the other user interface components accessed by all the users. For the
business logic we use the Java Server Pages scripting language to implement the database
connectivity and store the documents in the database server.

Use case:
(Figure: use case diagram. Actors: Patient, Doctor, TA (trust authority) and CS (cloud server).
Use cases: Registering, Login, Upload Reports, Encrypted Keywords, Patient Request,
Key Generator, Trapdoor, Logout.)
Sequence Diagram:
(Figure: sequence diagrams for the three roles.)
Patient (objects: Registering, Login, Upload Reports, Encrypted Keywords, Logout):
1: Fill up the registration form()
2: Enter valid login credentials()
3: Encrypt the patient reports()
4: Generate encrypted keywords()
5: Exit from the system()
Doctor (objects: Registering, Login, Patient Request):
1: Enter registration details()
2: Enter valid login credentials()
3: Access patient reports()
4: Logout from the system()
TA and CS (objects: Login, Key Generator, Trapdoor, Logout):
1: Enter login credentials()
2: Generate keys for users()
3: Exit from the system()
4: Login with valid credentials()
5: Perform the trapdoor search for encrypted documents()
6: Logout from the application()
Class Diagram:
Activity:
(Figure: activity diagram. After Registering, the Login credentials are verified; invalid
credentials return to Login. On a valid login the flow branches by role: the Patient uploads
reports and generates encrypted keywords; the Doctor views patient reports; the TA generates
keys; and the Cloud Server performs the trapdoor search and sends the search results. All flows
end with Logout.)
Deployment Diagram
Data Flow Diagrams
A data flow diagram is a graphical tool used to describe and analyze the movement of data through a
system. DFDs are the central tool and the basis from which the other components are developed.
The transformation of data from input to output, through processes, may be described logically
and independently of the physical components associated with the system. These are known as
logical data flow diagrams. Physical data flow diagrams show the actual implementation and
movement of data between people, departments and workstations. A full description of a system
actually consists of a set of data flow diagrams. Data flow diagrams are developed using two
familiar notations: Yourdon and Gane & Sarson. Each component in a DFD is labeled with
a descriptive name, and each process is further identified with a number that is used for
identification purposes. The development of DFDs is done in several levels: each process in a
lower-level diagram can be broken down into a more detailed DFD at the next level. The top-
level diagram is often called the context diagram. It consists of a single process that plays a vital
role in studying the current system. The process in the context-level diagram is exploded into
other processes in the first-level DFD.
The idea behind the explosion of a process into more processes is that understanding at one
level of detail is exploded into greater detail at the next level. This is done until no further
explosion is necessary and an adequate amount of detail is described for the analyst to understand
the process.
Larry Constantine first developed the DFD as a way of expressing system requirements
in a graphical form; this led to modular design.
A DFD, also known as a “bubble chart”, has the purpose of clarifying system
requirements and identifying major transformations that will become programs in system design.
So it is the starting point of the design, down to the lowest level of detail. A DFD consists of a
series of bubbles joined by data flows in the system.
DFD Symbols:
In the DFD, there are four symbols
1. A square defines a source(originator) or destination of system data
2. An arrow identifies data flow. It is the pipeline through which the information flows
3. A circle or a bubble represents a process that transforms incoming data flow into outgoing
data flows.
4. An open rectangle is a data store: data at rest, or a temporary repository of data.

Process that transforms data flow.

Source or Destination of data

Data flow

Data Store
Constructing a DFD:

Several rules of thumb are used in drawing DFDs:

Processes should be named and numbered for easy reference. Each name should be
representative of the process. The direction of flow is from top to bottom and from left to
right. Data traditionally flow from the source to the destination, although they may flow back
to the source. One way to indicate this is to draw a long flow line back to the source; an
alternative way is to repeat the source symbol as a destination, marked with a short diagonal
since it is used more than once in the DFD. When a process is exploded into lower-level
details, the sub-processes are numbered. The names of data stores and destinations are written
in capital letters, while process and data flow names have the first letter of each word
capitalized. A DFD typically shows the minimum contents of a data store; each data store
should contain all the data elements that flow in and out. Missing interfaces, redundancies and
the like are then accounted for, often through interviews.

Salient Features of DFDs

1. The DFD shows the flow of data, not of control; loops and decisions are control
considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process: whether the data flows
take place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.
Data Flow:

1) A data flow has only one direction of flow between symbols. It may flow in both directions
between a process and a data store to show a read before an update; the latter is usually
indicated by two separate arrows, since the two happen at different times.
2) A join in a DFD means that exactly the same data come from any of two or more different
processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at least one
other process that handles the data flow, produces some other data flow, and returns the
original data flow to the beginning process.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use. A data flow has a noun phrase label;
more than one noun phrase can appear on a single arrow as long as all of the flows on the
same arrow move together as one package.
DFD:
DATA BASE
ABOUT MYSQL:
MySQL is a relational database management system (RDBMS) that runs as a server providing
multi-user access to a number of databases. The SQL phrase stands for Structured Query
Language. Free-software open source projects that require a full-featured database management
system often use MySQL. For commercial use, several paid editions are available and offer
additional functionality. Applications which use MySQL databases include TYPO3,
Joomla, WordPress, phpBB, Drupal and other software built on the LAMP software stack.
MySQL is also used in many high-profile, large-scale World Wide Web products, including
Wikipedia, Google, Facebook, and Twitter.
MySQL is the world's most popular open source database software, with over 100 million copies
of its software downloaded or distributed throughout its history. With its superior speed,
reliability, and ease of use, MySQL has become the preferred choice for Web, Web 2.0, SaaS,
ISV and telecom companies and forward-thinking corporate IT managers because it eliminates
the major problems associated with downtime, maintenance and administration for modern,
online applications.
Many of the world's largest and fastest-growing organizations use MySQL to save time and
money powering their high-volume Web sites, critical business systems, and packaged software
including industry leaders such as Yahoo!, Alcatel-Lucent, Google, Nokia, YouTube, Wikipedia,
and Booking.com.
The flagship MySQL offering is MySQL Enterprise, a comprehensive set of production-tested
software, proactive monitoring tools, and premium support services available in an affordable
annual subscription.
MySQL is a key part of LAMP (Linux, Apache, MySQL, PHP / Perl / Python), the fast-growing
open source enterprise software stack. More and more companies are using LAMP as an
alternative to expensive proprietary software stacks because of its lower cost and freedom from
platform lock-in.
MySQL was originally founded and developed in Sweden by two Swedes and a Finn: David
Axmark, Allan Larsson and Michael "Monty" Widenius, who had worked together since the
1980s. More historical information on MySQL is available on the MySQL website.
DATA BASE TABLES
SOFTWARE TESTING
Software testing is one of the main stages of the project development life cycle; it provides the
end user with information about the quality of the application. In our project we went through
several stages of testing: unit testing, done during the development stage while implementing the
application; manual testing with different test cases for all the different modules once the project
was ready; browser compatibility testing in the different web browsers on the market; and
client-side validation testing on our application.
Unit testing
Unit testing is done in the implementation stage of the project, and the errors are solved during
the development stage. Some of the errors we came across in development are given below.
TESTING
Testing and debugging form one of the most critical aspects of computer programming:
without a program that works, the system would never produce the output for which it was
designed. Testing is best performed when users are asked to assist in identifying all errors and
bugs. Sample data are used for testing; it is not the quantity but the quality of the data used that
matters in testing. Testing is aimed at ensuring that the system works accurately and efficiently
before live operation commences.
Testing objectives:
The main objective of testing is to uncover a host of errors, systematically and with minimum
effort and time. Stated formally, testing is a process of executing a program with the
intent of finding an error.
A successful test is one that uncovers an as-yet-undiscovered error.
A good test case is one that has a high probability of finding an error, if one exists.
If the tests are inadequate to detect possibly present errors, the software can only be said to more
or less conform to quality and reliability standards.
Levels of Testing:
In order to uncover the errors present in different phases, we have the concept of levels of testing.
The basic levels of testing map development phases to test phases:

 Client needs: acceptance testing
 Requirements: system testing
 Design: integration testing
 Code: unit testing

Figure: Levels of Testing


Code testing:
This examines the logic of the program. For example, the logic for updating various sample data
with the sample files and directories was tested and verified.
Specification Testing:
This executes the specification, stating what the program should do and how it should perform
under various conditions. Test cases for various situations and combinations of conditions in all
the modules are tested.
Unit testing:
In unit testing we test each module individually and then integrate it with the overall system. Unit
testing focuses verification efforts on the smallest unit of software design: the module. This is
also known as module testing. The modules of the system are tested separately. This testing is
carried out during the programming stage itself. In this testing step, each module is found to work
satisfactorily with regard to the expected output from the module. There are also validation checks
for fields; for example, a validation check is done on the user input to verify the validity of the
data entered. This makes it easy to find and debug errors in the system.
Each Module can be tested using the following two Strategies:
1. Black Box Testing
2. White Box Testing
BLACK BOX TESTING
What is Black Box Testing?
Black box testing is a software testing technique in which the functionality of the software under
test (SUT) is tested without looking at the internal code structure, implementation details or
knowledge of internal paths of the software. This type of testing is based entirely on the software
requirements and specifications.
In Black Box Testing we just focus on inputs and output of the software system without
bothering about internal knowledge of the software program.

This black box can be any software system you want to test, for example an operating
system like Windows, a website like Google, a database like Oracle, or even your own custom
application. Under black box testing, you can test these applications by just focusing on the
inputs and outputs without knowing their internal code implementation.

Black box testing - Steps


Here are the generic steps followed to carry out any type of black box testing (a minimal worked
example follows the list).
 Initially requirements and specifications of the system are examined.
 Tester chooses valid inputs (positive test scenario) to check whether SUT processes them
correctly. Also some invalid inputs (negative test scenario) are chosen to verify that the
SUT is able to detect them.
 Tester determines expected outputs for all those inputs.
 Software tester constructs test cases with the selected inputs.
 The test cases are executed.
 Software tester compares the actual outputs with the expected outputs.
 Defects if any are fixed and re-tested.
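The sketch below applies these steps to the registration form's e-mail validation: inputs and expected outputs are fixed first, then actual and expected outputs are compared, with no reference to the validator's internals. validateEmail is a hypothetical stand-in for the application's real validation logic, not the project's actual code.

// Black-box style check: only inputs and outputs are used, the internals
// of validateEmail (the system under test) are treated as opaque.
public class BlackBoxEmailTest {

    static boolean validateEmail(String email) {           // hypothetical SUT
        return email != null && email.matches("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
    }

    public static void main(String[] args) {
        String[] inputs = {"alice@example.com", "alice@", ""};
        boolean[] expected = {true, false, false};          // positive and negative cases
        for (int i = 0; i < inputs.length; i++) {
            boolean actual = validateEmail(inputs[i]);
            System.out.println("\"" + inputs[i] + "\" -> " + actual
                    + (actual == expected[i] ? "  PASS" : "  FAIL"));
        }
    }
}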
Types of Black Box Testing
There are many types of black box testing, but the following are the prominent ones:
 Functional testing – this type is related to the functional requirements of a
system; it is done by software testers.
 Non-functional testing – this type is not related to testing of specific
functionality, but to non-functional requirements such as performance, scalability and
usability.
 Regression testing – done after code fixes, upgrades or any other
system maintenance to check that the new code has not affected the existing code.
WHITE BOX TESTING
White box testing is the testing of a software solution's internal coding and infrastructure. It
focuses primarily on strengthening security, the flow of inputs and outputs through the
application, and improving design and usability. White box testing is also known as clear, open,
structural, and glass box testing.
It is one of two parts of the "box testing" approach to software testing. Its counterpart,
black box testing, involves testing from an external or end-user perspective. White box testing,
on the other hand, is based on the inner workings of an application and revolves around
internal testing. The term "white box" was used because of the see-through box concept: the
clear box or white box name symbolizes the ability to see through the software's outer shell (or
"box") into its inner workings. Likewise, the "black box" in "black box testing" symbolizes not
being able to see the inner workings of the software, so that only the end-user experience can be
tested.
What do you verify in White Box Testing?
White box testing involves the testing of the software code for the following:
 Internal security holes
 Broken or poorly structured paths in the coding processes
 The flow of specific inputs through the code
 Expected output
 The functionality of conditional loops
 Testing of each statement, object and function on an individual basis

The testing can be done at the system, integration and unit levels of software development. One of
the basic goals of white box testing is to verify a working flow for an application. It involves
testing a series of predefined inputs against expected or desired outputs, so that when a specific
input does not result in the expected output, you have encountered a bug.
How do you perform White Box Testing?
To give you a simplified explanation of white box testing, we have divided it into two basic
steps. This is what testers do when testing an application using the white box testing technique:
STEP 1) UNDERSTAND THE SOURCE CODE
The first thing a tester will often do is learn and understand the source code of the application.
Since white box testing involves the testing of the inner workings of an application, the tester
must be very knowledgeable in the programming languages used in the applications they are
testing. Also, the testing person must be highly aware of secure coding practices. Security is
often one of the primary objectives of testing software. The tester should be able to find security
issues and prevent attacks from hackers and naive users who might inject malicious code into the
application either knowingly or unknowingly.
Step 2) CREATE TEST CASES AND EXECUTE
The second basic step in white box testing involves testing the application’s source code for
proper flow and structure. One way is by writing more code to test the application’s source code.
The tester develops little tests for each process or series of processes in the application. This
method requires that the tester have intimate knowledge of the code, and it is often done by
the developer. Other methods include manual testing, trial-and-error testing and the use of
testing tools. A small branch-coverage sketch follows.
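Here is the promised sketch: the tests for countMatches (a hypothetical unit, not the project's code) are chosen by inspecting its branches, so that the match branch, the no-match branch and the zero-iteration loop case all execute.

// White-box style sketch: inputs are derived from the code's structure.
public class WhiteBoxBranchTest {

    static int countMatches(String[] words, String key) {
        int n = 0;
        for (String w : words) {
            if (w.equals(key)) {                            // branch: match
                n++;
            }                                               // implicit branch: no match
        }
        return n;
    }

    public static void main(String[] args) {
        check(countMatches(new String[]{"a", "b", "a"}, "a"), 2); // both branches taken
        check(countMatches(new String[]{"b"}, "a"), 0);           // no-match branch only
        check(countMatches(new String[]{}, "a"), 0);              // loop body never runs
    }

    static void check(int actual, int expected) {
        System.out.println(actual == expected ? "PASS" : "FAIL (got " + actual + ")");
    }
}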
System testing:
Once the individual module testing is completed, the modules are assembled and integrated to
perform as a system. Top-down testing, which proceeds from upper-level to lower-level modules,
was carried out to check whether the entire system performs satisfactorily.
There are three main kinds of System testing:
i. Alpha Testing
ii. Beta Testing
iii. Acceptance Testing
Alpha Testing:
This refers to the system testing that is carried out by the test team within the organization.
Beta Testing:
This refers to the system testing that is performed by a selected group of friendly customers.
Acceptance Testing:
This refers to the system testing that is performed by the customer to determine whether or not to
accept the delivery of the system.
Integration Testing:
Data can be lost across an interface, one module can have an adverse effort on the other sub
functions, when combined, may not produce the desired major functions. Integrated testing is the
systematic testing for constructing the uncover errors within the interface. The testing was done
with sample data. The developed system has run successfully for this sample data. The need for
integrated test is to find the overall system performance.
Output testing: After validation testing, the next step is output testing. The output displayed or
generated by the system under consideration is tested by asking the user about the format
required by the system.
Testing done when the application was in the development stage
Class version error in our application

This error arises when we move our application from one system to another, mainly when there
are version mismatches in the software we use.
Path-related error in our application
This error arose when performance metrics were to be shown in a graph and the server directory
path was missing on the system, so we got this error in the application during the development
stage.
Server Connection Error
Manual testing on project application
TEST CASES
Test Case ID: #1
Test Case Description: Validations in Registration Form
Prerequisites: User should be registered.
Test Data Requirement: Data should be valid.
Test Condition: Entering data in the registration form.

Step # | Step Details | Expected Results | Actual Results | Pass/Fail/Not Executed/Suspended
1 | User gives first and last name only | Pop-up showing email verification message | Enter valid email/password | Fail
2 | Submitting the form without entering any details | Pop-up showing email verification message | Enter email/password | Fail
3 | User enters an invalid format of email id | Pop-up showing email id verification message | Enter valid email | Fail
4 | User enters a phone number with < 10 digits | Pop-up showing email verification message | Enter valid phone number | Fail
5 | Entering valid username and password | Pop-up showing email verification message | Pop-up showing email verification message | Pass

Table 1: Registration test case
Test Case ID: #2
Test Case Description: Validations in Login Form
Prerequisites: User should have an email id.
Test Data Requirement: Data should be valid.
Test Condition: Entering data in the login form.

Step # | Step Details | Expected Results | Actual Results | Pass/Fail/Not Executed/Suspended
1 | User gives an email or password of < 6 characters | User logged in | Enter valid email/password | Fail
2 | Submitting the form without entering any details | User logged in | Enter email/password | Fail
3 | User enters wrong email and/or password | User logged in | Enter correct email/password | Fail

Table 2: Login test case
BROWSER COMPATIBILITY TESTING TO PROJECT APPLICATION
Browser compatibility is the manner in which a web page looks in different web browsers.
Different browsers read website code differently; in other words, Chrome will render a
website differently than Firefox or Internet Explorer will.
Cross browser compatibility testing has been gaining a lot of traction in recent years and there is
a reason for it. While technology is evolving rapidly, people aren’t. A significant number of
people are resistant to change or, more specifically, “have an aversion to upgrading their tech”.
In this scenario, it’s browser compatibility testing that enables companies to ensure that no
customer is left behind or has an experience that is not desired. So even though browsers like
Google Chrome and Firefox dominate the market, there are people using their older versions, or
other browsers. And their numbers are too high to be ignored.
What is cross browser compatibility testing?
Cross browser compatibility testing is a non-functional form of testing which emphasizes making
your website’s basic features and functionality available to users on different browser-OS
combinations, devices, and assistive tools.
How does it impact your application?
Not all browsers and devices work on the same configuration; they face browser compatibility
issues on different levels. This inconsistency is the reason why you might observe the lack of
application uniformity across browsers and devices. You would not want a section of your
prospective users to not be able to access the application features.
That is what makes cross browser testing important. If your website is not tested and debugged
on different platforms and browsers, it won’t work the same on all of them, causing
inconvenience to the users, subsequently impacting your business.
Which browsers to choose for cross browser testing?
Since it’s impossible to test on every possible browser-device combination, you need to shortlist
the most important ones to test your web application on. As of December 2018, Google Chrome
has the largest number of users. It accounts for about 70.95% of the market. Firefox comes
second with a market share of 10.05%, while others such as IE, Safari, and Edge have a market
share of approximately 4-5% each.
Result of my application in UC Browser
Result of my application in Chrome
Result of my application in Opera
VALIDATION TESTING FOR PROJECT APPLICATION
SCREENSHOTS
CONCLUSIONS
This paper presented a new cryptographic primitive which supports hybrid Boolean keyword
search over outsourced encrypted data in attribute-based settings. In particular, the data owner can
control the search permission for his encrypted data, so that only authorized users can retrieve
the encrypted data. Additionally, every user can delegate a private key to another user with
restricted credentials. The results of the evaluation show that the primitive is efficient and
practical. We also analysed the security of the primitive under our security model.
REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A.
Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A view of cloud computing,” Commun. ACM,
vol. 53, no. 4, pp. 50–58, 2010.
[2] P. C. Tang, J. S. Ash, D. W. Bates, J. M. Overhage, and D. Z. Sands, “White paper: Personal
health records: Definitions, benefits, and strategies for overcoming barriers to adoption,”
JAMIA, vol. 13, no. 2, pp. 121–126, 2006.
[3] Y. Liu, Y. L. Sun, J. Ryoo, S. Rizvi, and A. V. Vasilakos, “A survey of security and privacy
challenges in cloud computing: Solutions and future directions,” JCSE, vol. 9, no. 3, 2015.
[4] C. Bösch, P. Hartel, W. Jonker, and A. Peter, “A survey of provably secure searchable
encryption,” ACM Computing Surveys, vol. 47, no. 2, pp. 18:1–18:51, 2014.
[5] R. Curtmola, J. A. Garay, S. Kamara, and R. Ostrovsky, “Searchable symmetric encryption:
improved definitions and efficient constructions,” in Proceedings of the 13th ACM Conference
on Computer and Communications Security, CCS 2006, Alexandria, VA, USA, October 30 -
November 3, 2006, 2006, pp. 79–88.
[6] ——, “Searchable symmetric encryption: Improved definitions and efficient constructions,”
Journal of Computer Security, vol. 19, no. 5, pp. 895–934, 2011.
[7] C. Dong, G. Russello, and N. Dulay, “Shared and searchable encrypted data for untrusted
servers,” Journal of Computer Security, vol. 19, no. 3, pp. 367–397, 2011.
[8] S. Kamara, C. Papamanthou, and T. Roeder, “Dynamic searchable symmetric encryption,” in
the ACM Conference on Computer and Communications Security, CCS’12, Raleigh, NC, USA,
October 16-18, 2012, 2012, pp. 965–976.
[9] K. Kurosawa and Y. Ohtaki, “UC-secure searchable symmetric encryption,” in Financial
Cryptography and Data Security - 16th International Conference, FC 2012, Kralendijk, Bonaire,
February 27 - March 2, 2012, Revised Selected Papers, 2012, pp. 285–298.
[10] S. Kamara and C. Papamanthou, “Parallel and dynamic searchable symmetric encryption,”
in Financial Cryptography and Data Security - 17th International Conference, FC 2013,
Okinawa, Japan, April 1-5, 2013, Revised Selected Papers, 2013, pp. 258–274.
