
DeeperCoder: Code Generation Using Machine Learning

Abstract—In this paper, we present a program generation system based on an input and output specification. The system is developed using the programming-by-example technique from program synthesis, and it can generate computer programs that satisfy user requirements given as inputs and outputs. We created a simple Domain Specific Language (DSL) to be used in program synthesis, and we trained our neural network on a large input space with corresponding sample training programs. To obtain a final program that satisfies all the user specifications, we used inductive program synthesis and machine learning. We also experimented with different deep learning models to obtain the desired results with a reduced number of steps and execution time. Finally, we show that a three-layer neural network with LeakyReLU achieves the best performance compared to the other approaches.

Index Terms—Program Synthesis, Domain Specific Language

I. INTRODUCTION

Due to the digital revolution in technologies, developers are solving much bigger and more complex problems. From simple feature development tasks to the development of complex distributed software systems, developers need to focus on various tasks daily. Much of the time, the implementation of a task revolves around repetitive subtasks such as sorting data in a list or filtering results by some condition. While implementing these subtasks, developers may lose the context of the logic required to solve the actual problem, resulting in wasted time and effort. We want to target this problem and provide a simple solution that helps developers implement common repetitive tasks, such as the sorting and filtering mentioned above, so that they can focus on the actual problem and increase their productivity at work.

Various solutions currently used in industry solve parts of the problem discussed above. Most of them are implemented in the development editors used by developers, for example autosuggestion in JetBrains editors such as IntelliJ, or code completion in Microsoft's Visual Studio Code. Thus, the industry already provides recommendations on the fly while a developer writes code; but what if we could provide the entire solution just by looking at an input-output specification supplied by the developer? Suppose the developer is given a list as input and is asked to develop a program that produces a given output, which is again a list. Rather than the developer writing the code, we can generate the code from the input-output specification, and only if we cannot generate it does the developer need to write it. This helps in two ways: the developer does not have to write any code or rely on suggestions while writing it, and we can target a more general audience, not only software developers. We explored one such field, known as program synthesis. Program synthesis is studied within programming languages research and can be applied to generate programs from an input-output specification. Programming-by-examples [1] is a concept in program synthesis that generates programs from input-output specifications. Our solution keeps the programming-by-examples model as its base and builds upon it to achieve program generation.

Generalization is one of the major problems in program synthesis. Many high-level programming languages exist today, such as Java and C++. These high-level languages have advanced compilers to execute programs and also provide a large number of library functions. So, if we want to generate a program in one of these languages, there is an enormous number of permutations and combinations from which to pick the n functions that make up a program, and the time taken to find and execute a solution in these languages is correspondingly large. This time also grows exponentially with the length of the program to be generated. Further, the problem becomes more complex because syntax varies greatly across these languages. Thus, it is difficult to develop a solution to our problem that is both generic and efficient using any high-level language. To tackle this, researchers in program synthesis design a use-case-specific language specialized to the task at hand, termed a domain-specific language [2]. Microsoft has developed various domain-specific languages for text transformation, JSON transformation, and the like in its Microsoft Excel product [1]. Hence, to solve our problem we can generate a program in our own domain-specific language that satisfies the given input-output example.

Our goal was to develop a program synthesis system that takes an input-output example specification and produces a program satisfying that specification. The generated program can then be reused by the end-user for other similar problems. As part of the implementation, we designed a 32-function domain-specific language (DSL). Our DSL includes functions such as MAP, SORT, REVERSE, and FILTER; all of them are explained in Appendix A. These functions take either a LIST, or a LIST and an INT together, as input, and they produce either an INT or a LIST as output. We then implemented a search strategy that explores combinations of DSL functions to generate a program of a predefined length provided by the end-user. The search depends on a function order dictionary. To make the search efficient, we developed a neural network model that gives us the probabilities of the functions that may appear in the final generated program. Based on these probabilities we sort and re-order the function order dictionary, which tries better candidate functions earlier in the search and yields the generated program in fewer steps and with better execution time.
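As an illustration of the intended workflow, consider the hypothetical specification below (the values and the program here are ours, not an example taken from the experiments). Given the input-output pair, the system searches the DSL for a two-function program that maps the input to the output:

    Input:   [7, 2, 5, 9]
    Output:  [9, 7, 5, 2]
    Generated program (T = 2): SORT, then REVERSE
    (sorting gives [2, 5, 7, 9]; reversing gives [9, 7, 5, 2])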
II. RELATED WORKS

In DeepCoder: Learning to Write Programs [2], the authors constructed a program synthesis system that generates programs with the help of a machine learning model. The paper describes the implementation details of the machine learning model, which is then applied to program synthesis to minimize the computation. The authors showed that applying machine learning techniques to program synthesis significantly reduces the number of steps required to generate a program and improves the program generation execution time.

Another modern example of program synthesis is Synquid [3]. Synquid is a program synthesizer based on refinement types, which are specifications that Synquid understands in order to generate functional programming code. The three main dimensions of program synthesis are behavioral constraints, structural constraints, and the search strategy [4]. A behavioral constraint defines how a user interacts with the system. Structural constraints limit the program space to explore; an existing or user-defined Domain Specific Language (DSL) is used to define the structural constraint. The search strategy consists of searching for a combination of functions and then ranking the generated programs to find the most suitable code. One enumerative search technique uses a divide-and-conquer approach [5]: smaller expressions that are correct on subsets of the inputs, and predicates that distinguish these subsets, are enumerated separately and then combined using a decision tree to find an expression that is correct on all the input examples.

SQLizer is a program synthesis technique that automatically generates SQL queries from natural language input [6]. It uses a natural-language-based program synthesis methodology that combines semantic parsing techniques with automated program repair; the parsing involves type-directed program synthesis, which helps in classification. DirectFix, a semantics-based code repair tool, provides the most straightforward patch such that the program structure of the buggy program is maximally preserved [7]. An efficient way of localizing and fixing code into an acceptable form is achieved by a technique based on MaxSAT (the maximum satisfiability problem); the basis for this technique is component-based synthesis [8].

III. OVERVIEW OF PROGRAM SYNTHESIS

In this paper, we make use of existing machine learning algorithms to predict programs that satisfy input-output examples. The main idea behind the system is to predict the probabilities of functions ahead of time; for example, we predict the probabilities of all higher-order functions. If we know these probabilities ahead of time, we can guide the search effectively, which also helps us generate a program that satisfies the input-output examples with a smaller number of comparisons. At the same time, we keep track of the number of problems that we can solve.

Fig. 1. Program Synthesis Example

Our goal is to solve a greater number of problems with the minimum number of steps. Figure 1 illustrates a simple example of user input and the generated program. To generate programs that meet all the requirements of the user inputs, it is important to predict the probabilities of the functions accurately. To achieve this, we developed different neural network models trained on thousands of input-output examples. We then generate function probabilities from each of these models, which helps us prune our search to fewer functions.
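The following minimal Python sketch shows the idea of using the predicted probabilities to guide the search. The function names come from our DSL, but the probability values and the helper name order_candidates are illustrative only:

    # Re-order candidate DSL functions by the probabilities predicted by the
    # neural network, so that the search tries the most likely functions first.
    # The probability values below are made up for illustration.
    predicted = {'SORT': 0.81, 'REVERSE': 0.74, 'MAP': 0.22, 'FILTER': 0.09}

    def order_candidates(candidates, predicted):
        # Sort candidates by descending predicted probability.
        return sorted(candidates, key=lambda name: predicted.get(name, 0.0), reverse=True)

    print(order_candidates(['FILTER', 'MAP', 'REVERSE', 'SORT'], predicted))
    # ['SORT', 'REVERSE', 'MAP', 'FILTER']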
IV. PROGRAM SYNTHESIS COMPONENTS

A. Domain Specific Language (DSL)

A Domain-Specific Language is a high-level language developed towards achieving a specific goal [2]. It provides appropriate notations and functions restricted to a specific application domain. Therefore, to apply program synthesis we need to design a custom domain-specific language suitable for our application domain. Higher-level languages like Java or Python have numerous built-in functions, and a search over that program space would take days to find the best program satisfying the input-output specification, even with state-of-the-art searching techniques. The DSL limits the scope of the program space, and this confined scope is then searched for the final program.

For our system, we implemented our own DSL. It includes 32 different functions, a combination of lower-order functions like ADD, SUBTRACT, MULTIPLY, and DIVIDE and higher-order functions like MAP, FILTER, and COUNT. The DSL also contains the implementation details of all 32 functions that we plan to use.

B. Searching Strategy for Program Generation

The search algorithm is a classical Depth First Search (DFS) implementation using recursion and backtracking. We explored various approaches and chose this one because it suits our use case. The search is guided by a key variable "T", the length of the program, i.e., the number of functions that must be present in the program before it is checked against the output. Once the search reaches length "T", we backtrack to find other paths. Currently, we can generate programs of length T = 1 to T = 4, but the algorithm can later be tuned to be generic enough to handle programs of length greater than 4.

V. IMPLEMENTATION

A. General Idea

To train the neural network we need a dataset that contains various input-output examples along with their programs; the network returns a probability matrix. This probability matrix is a 1x34 matrix holding 34 functions along with their probabilities. These probabilities indicate whether a particular function will appear in the final program; a higher probability indicates that the function is more likely to be present in the final generated program. After generating the probability matrix, we run the DFS searching strategy, which searches in the descending order predicted by the neural network. Once the search is complete, we return the generated program as output.

B. Training Dataset

The dataset used in our system consists of 5 JSON files. The first JSON file consists of input-output examples that generate a program of length 1, the second consists of input-output examples for programs of length 2, and so on. Each object inside a JSON file consists of at least 5 different test cases for every program, and the generated program is validated against all 5 of these test cases. We train the neural network on a sample dataset which contains the input-output examples along with the actual program [2]. Once the neural network is trained, we test it against a separate dataset and generate the probability matrix for each of the DSL-supported functions. As of now, we support inputs that contain only LIST or LIST|INT. We plan to enhance the system to support other complex input types like LIST|LIST or LIST|LIST|INT.
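For illustration, a single training object for a program of length 2 might look like the following; the field names and layout here are only an assumed shape consistent with the description above (one program, at least five test cases), not the exact schema of our files:

    {
      "program": ["SORT", "REVERSE"],
      "examples": [
        {"input": [[7, 2, 5, 9]], "output": [9, 7, 5, 2]},
        {"input": [[3, 1, 4]],    "output": [4, 3, 1]},
        {"input": [[0, -6, 8]],   "output": [8, 0, -6]},
        {"input": [[5]],          "output": [5]},
        {"input": [[2, 2, 1]],    "output": [2, 2, 1]}
      ]
    }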
C. Domain Specific Language Implementation

There are 32 functions that we support in the domain-specific language, and the DSL also contains the implementation of these functions, written as Python lambdas. This is the most important block of program synthesis, hence special care must be taken to implement these functions accurately.

Appendix A shows the implementation details: we use a dictionary whose keys are the function names and whose values are the corresponding Python lambda implementations. Based on the input and output types, these functions are executed on the fly while searching through the program space. We have a few higher-order functions like MAP, FILTER, and COUNT. Each function is given a valid input and output type, based on which functions are either picked for searching or pruned from the search.
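A minimal sketch of this dictionary is shown below; the (input type, output type) tags are an assumed representation used only to illustrate how functions are picked or pruned by type, and the full 32-function table is given in Appendix A:

    # Function name -> (input type, output type, implementation).
    DSL = {
        'REVERSE': ('LIST',     'LIST', lambda xs: xs[::-1]),
        'SORT':    ('LIST',     'LIST', lambda xs: sorted(xs)),
        'SUM':     ('LIST',     'INT',  lambda xs: sum(xs)),
        'TAKE':    ('INT|LIST', 'LIST', lambda n, xs: xs[:n]),
    }

    def candidates_for(input_type):
        # Keep only the functions whose input type matches; prune the rest.
        return [name for name, (in_type, _, _) in DSL.items() if in_type == input_type]

    print(candidates_for('LIST'))   # ['REVERSE', 'SORT', 'SUM']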
D. The Search Algorithm

The search algorithm is a classical DFS implementation using recursion and backtracking. Below is the pseudo-code of how we implemented this algorithm:

Algorithm 1: Generate a program that satisfies the input-output examples
Require: G is a dictionary of input types and functions
  inputType ← type(example[input])
  outType ← type(example[output])
  for f in the adjacency list of G[inputType] do
    helperDFS(t, examples, T, f)
  end for

Algorithm 2: Helper DFS Function
Require: G is a dictionary of input types and functions
  if t == T then
    validateGeneratedProgram(program, examples)
  end if
  program ← (program + f)
  output ← executeCurrentFunction(f, examples)
  outType ← type(output)
  for f in the adjacency list of G[inputType] do
    helperDFS(t, examples, T, f)
  end for
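The sketch below is a minimal executable Python version of Algorithms 1 and 2. It is simplified to LIST-typed values, a single input-output example, and a plain dictionary of unary functions; the real implementation validates against all test cases and orders the candidates by the predicted probabilities:

    def generate(example, T, dsl):
        # Depth-first search with backtracking over programs of exactly T functions.
        found = []

        def helper(value, program, t):
            if found:                      # stop after the first satisfying program
                return
            if t == T:
                if value == example['output']:
                    found.append(program)
                return
            for name, fn in dsl.items():   # candidates would be probability-ordered
                helper(fn(value), program + [name], t + 1)

        helper(example['input'], [], 0)
        return found[0] if found else None

    dsl = {'REVERSE': lambda xs: xs[::-1], 'SORT': sorted}
    print(generate({'input': [7, 2, 5], 'output': [7, 5, 2]}, T=2, dsl=dsl))
    # ['SORT', 'REVERSE']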
E. Basic Neural Network Implementation

The idea for the basic neural network architecture is derived from the original DeepCoder model implementation [2]. However, we added a few more optimizations to see whether we could get better performance.

Fig. 2. Neural Network Architecture

Below are the steps that we followed to build the neural network:

1) Parsing: First, we convert all input and output examples into X and Y vectors to feed to the neural network.
2) Encoding: We keep a standard integer range from -256 to 255. To handle negative values (we cannot pass negative values to the neural network), we add 255 to all integers so that we only have a positive value distribution. The LIST, INT, and NONE input types are each encoded to a specific integer.
3) Padding: Using the above encoding we create the input vector. If the length of this vector is less than 20, we append NONE values to make it a fixed size of length 20.
4) Concatenation: We then concatenate the input types, the embedded integers, and the output types into a single fixed-length vector.
5) Neural Net: This vector is passed through a series of 3 sigmoidal layers with 256 units each, and we pool the last layer's output using simple arithmetic averaging. The optimizer is Adam with binary cross-entropy loss.
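A small sketch of encoding steps 2-4 above is given below. The 255 shift and the fixed length of 20 come from the description; the NONE marker value and the overall layout of the vector are assumptions made for illustration:

    import numpy as np

    NONE_MARKER = 600   # assumed integer encoding for padded NONE entries

    def encode_values(xs, length=20):
        # Step 2: shift integers by 255 as described above; step 3: pad to a fixed length.
        vals = [x + 255 for x in xs][:length]
        return vals + [NONE_MARKER] * (length - len(vals))

    def encode_example(input_list, output_list):
        # Step 4: concatenate the encoded input and output into one fixed-length vector
        # (type markers would be concatenated here as well).
        return np.array(encode_values(input_list) + encode_values(output_list),
                        dtype=np.float32)

    print(encode_example([7, 2, 5, 9], [9, 7, 5, 2]).shape)   # (40,)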
F. Description of each model

1) Model 1: 3 layers with 256 units each. The first two layers use LeakyReLU activation with alpha = 0.5 and the third layer uses sigmoid activation. The optimizer is Nadam.
2) Model 2: 3 layers with 256 units each. The first two layers use LeakyReLU activation with alpha = 0.6 and the third layer uses sigmoid activation. The optimizer is Nadam.
3) Model 3: 5 layers with 256 units each. All layers use sigmoid activation and the optimizer is Nadam.
4) Model 4: 3 layers with 256 units each. All layers use sigmoid activation and the optimizer is Nadam.
5) Model 5: 3 layers with 256 units each. All layers use tanh activation and the optimizer is Adam.
6) Model 6: 5 layers with 256 units each. All layers use sigmoid activation and the optimizer is Nadam.
7) Model 7: The original DeepCoder model. 3 layers with 256 units each. All layers use sigmoid activation and the optimizer is Adam.
8) Model 8: 3 layers with 256 units each. The first two layers use LeakyReLU activation with alpha = 0.7 and the third layer uses sigmoid activation. The optimizer is Nadam.
9) Model 9: 3 layers with 256 units each. The first two layers use ReLU activation and the third layer uses sigmoid activation. The optimizer is Nadam.
10) Model 10: 3 layers with 256 units each. The first two layers use ReLU activation and the third layer uses sigmoid activation. The optimizer is Adam.
11) Model 11: 3 layers with 256 units each. The first two layers use LeakyReLU activation with alpha = 0.4 and the third layer uses sigmoid activation. The optimizer is Nadam.
12) Model 12: 3 layers with 256 units each. All layers use LeakyReLU activation with alpha = 0.5 and the optimizer is Nadam.
13) Model 13: 3 layers with 256 units each. All layers use hard sigmoid activation and the optimizer is Nadam.
14) Model 14: 3 layers with 256 units each. All layers use tanh activation and the optimizer is Nadam.
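The sketch below shows one way to construct these variants with Keras. The helper name, the input dimension, and the final output layer are assumptions, but the layer counts, unit sizes, activations, and optimizers follow the descriptions above (e.g. Model 1 is three 256-unit layers with LeakyReLU, alpha = 0.5, on the first two and sigmoid on the third, trained with Nadam):

    from tensorflow import keras

    def build_model(activations, alpha=0.5, optimizer='nadam',
                    input_dim=40, units=256, n_functions=34):
        # Build a stack of Dense layers with the requested activations.
        model = keras.Sequential()
        model.add(keras.layers.Input(shape=(input_dim,)))
        for act in activations:
            if act == 'leaky_relu':
                model.add(keras.layers.Dense(units))
                model.add(keras.layers.LeakyReLU(alpha=alpha))
            else:   # 'sigmoid', 'tanh', 'relu', 'hard_sigmoid', ...
                model.add(keras.layers.Dense(units, activation=act))
        # Assumed output layer: one independent probability per DSL function.
        model.add(keras.layers.Dense(n_functions, activation='sigmoid'))
        model.compile(optimizer=optimizer, loss='binary_crossentropy')
        return model

    model_1 = build_model(['leaky_relu', 'leaky_relu', 'sigmoid'], alpha=0.5, optimizer='nadam')
    model_7 = build_model(['sigmoid'] * 3, optimizer='adam')   # the base DeepCoder-style model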
G. Model Enhancement

To enhance the model, we tried various advanced activation functions, added extra layers to the model, and used different optimization functions; the parameters used to construct these models are described below. The proposed system works for smaller programs, where the final generated program consists of up to 4 function combinations, but it can be extended to larger programs of length 5 or more at the cost of an increased number of computations. The output is a JSON string representing the functions and the order in which they should be applied; the final generated program is not tied to any particular programming language, hence program compilers are beyond the scope of this work.

1) Adding extra layers to the original model: The idea behind this model is that each layer adds its own level of non-linearity, which cannot be contained in a single layer. Moreover, the inputs of each layer are combined linearly, so a single layer cannot match the non-linearity of the entire network. The disadvantage of introducing more layers, however, is over-fitting. We therefore proceeded with this experiment by incrementing the number of layers and observing the results. We found that by adding two extra layers to the original model, while keeping all other parameters the same, the number of steps and the search time were reduced.
2) Using different activation functions other than the sigmoidal function used in the original neural network architecture [2]:
   a) Tanh: Tanh helps with the backpropagation of errors and avoids the problem where all values are identical, since it is symmetric about the origin [9]. Mathematical form:

      tanh(x) = 2 * sigmoid(2x) - 1    (1)

      tanh(x) = (e^(2x) - 1) / (e^(2x) + 1)    (2)

   b) ReLU: ReLU is a non-linear function which is advantageous for propagating errors during backpropagation [10]. Mathematical form:

      f(x) = max(x, 0)    (3)

   c) LeakyReLU: Using a small alpha when the unit is inactive helps to solve the problem of dead neurons. In our experiments we tried different values of alpha ranging from 0.1 to 1 and concluded that alpha = 0.5 gives the best results. Mathematical form:

      f(x) = alpha * x if x < 0, and x otherwise    (4)

   d) Hard sigmoid: This activation function saves computation time for faster results. Its shape is the same as that of the sigmoid function except that it is a non-smooth, piecewise-linear function. Mathematical form:

      f(x) = max(0, min(1, (x + 1) / 2))    (5)

3) Nadam optimizer: We observed that using Nesterov-accelerated Adaptive Moment Estimation (Nadam) instead of Adaptive Moment Estimation (Adam) improved our model's performance [11]. By summing the exponential decay of the moving averages of the previous and current gradients, Nadam speeds up the learning process.
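For reference, equations (1)-(5) can be written out directly in NumPy. This is only an illustration of the formulas as stated above, not the library implementations (Keras' built-in hard sigmoid, for example, uses a different slope):

    import numpy as np

    sigmoid      = lambda x: 1.0 / (1.0 + np.exp(-x))
    tanh_v1      = lambda x: 2.0 * sigmoid(2.0 * x) - 1.0                        # eq. (1)
    tanh_v2      = lambda x: (np.exp(2.0 * x) - 1.0) / (np.exp(2.0 * x) + 1.0)   # eq. (2)
    relu         = lambda x: np.maximum(x, 0.0)                                  # eq. (3)
    leaky_relu   = lambda x, alpha=0.5: np.where(x < 0, alpha * x, x)            # eq. (4)
    hard_sigmoid = lambda x: np.maximum(0.0, np.minimum(1.0, (x + 1.0) / 2.0))   # eq. (5)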
H. Experiments and Results

The graphs in Fig. 3 and Fig. 4 compare the performance of the different models over a pool of 500 examples; the model terminology shown on the y-axis is explained in Appendix B, and the models themselves are described in Section V-F. We measure how many steps each model takes to solve the problems and the total computation time for each model.

Fig. 3. Number of Steps for T=3

Fig. 4. Computation Time for T=3

As shown in Fig. 3, the performance of Model 3 and Model 1 is better than that of the other models in the experiment, both in the number of steps and in the execution time to solve the problems. This performance is also better than that of the original DeepCoder model (Model 7 in our experiments).

In Table I, we show the steps required to generate programs of length T = 1, 2, 3 and 4. We observed a significant difference in the total steps taken to generate a program after applying machine learning techniques.

TABLE I
TOTAL STEPS COMPARISON BETWEEN VARIOUS MODELS

         w/o ML model   Base Model   Best Model   Efficiency in %
  T=1    19             15           14           6.67
  T=2    122            109          102          6.42
  T=3    2234           1959         1718         12.30
  T=4    9011           3488         2991         14.25
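The Efficiency column is consistent with the relative reduction in steps of the best model over the base model, Efficiency = (Base - Best) / Base x 100; for example, for T = 3 this gives (1959 - 1718) / 1959 x 100 ≈ 12.30%.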
VI. CONCLUSION

In this paper, we proposed a program synthesis framework, a paradigm for applying state-of-the-art machine learning tools to the problem of program generation. We successfully show the ability of our neural network models to predict the probabilities of the functions defined in the DSL. To show the efficiency of our approach, we evaluated it using fifteen different neural network models under multiple settings. Results show that LeakyReLU with Nadam optimization outperforms the other models.

APPENDIX A
DOMAIN SPECIFIC LANGUAGE

Our domain specific language (DSL) consists of 32 functions. These functions are listed below by name with their corresponding Python lambda implementation [2]. Here n is a variable representing a valid integer and xs is a variable representing a valid LIST:

1) 'HEAD': lambda xs: xs[0] if xs else None
2) 'LAST': lambda xs: xs[-1] if xs else None
3) 'TAKE': lambda n, xs: xs[:n]
4) 'DROP': lambda n, xs: xs[n:]
5) 'ACCESS': lambda n, xs: xs[n] if n >= 0 and len(xs) > n else None
6) 'MINIMUM': lambda xs: min(xs) if xs else None
7) 'MAXIMUM': lambda xs: max(xs) if xs else None
8) 'REVERSE': lambda xs: xs[::-1]
9) 'SORT': lambda xs: sorted(xs)
10) 'SUM': lambda xs: sum(xs)
11) 'MAP': lambda function, xs: [function(x) for x in xs]
12) 'FILTER': lambda function, xs: [x for x in xs if function(x)]
13) 'COUNT': lambda function, xs: len([x for x in xs if function(x)])
14) '+1': lambda x: x + 1
15) '-1': lambda x: x - 1
16) '*2': lambda x: x * 2
17) '/2': lambda x: int(x / 2)
18) '*-1': lambda x: -x
19) '**2': lambda x: x ** 2
20) '*3': lambda x: x * 3
21) '/3': lambda x: int(x / 3)
22) '*4': lambda x: x * 4
23) '/4': lambda x: int(x / 4)
24) '>0': lambda x: x > 0
25) '<0': lambda x: x < 0
26) 'EVEN': lambda x: x % 2 == 0
27) 'ODD': lambda x: x % 2 == 1
28) '+': lambda x, y: x + y
29) '-': lambda x, y: x - y
30) '*': lambda x, y: x * y
31) 'min': min
32) 'max': max
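A generated program can be executed by chaining these lambdas. The sketch below assumes DSL is the name-to-lambda dictionary listed above and uses a simple tuple form (illustrative only, not the system's actual representation) for higher-order steps such as MAP:

    def run_program(program, value, DSL):
        # Apply each step of the program to the running value.
        for step in program:
            if isinstance(step, tuple):      # e.g. ('MAP', '+1'): pass '+1' into MAP
                func, arg = step
                value = DSL[func](DSL[arg], value)
            else:                            # e.g. 'SORT'
                value = DSL[step](value)
        return value

    # run_program([('MAP', '+1'), 'SORT', 'REVERSE'], [1, 3, 10], DSL)
    # [1, 3, 10] -> [2, 4, 11] -> [2, 4, 11] -> [11, 4, 2]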
APPENDIX B
NEURAL NETWORK MODELS

1) Model 1: leakyRelu_nadam_point_5
2) Model 2: leakyRelu_nadam_point_6
3) Model 3: extra_2_layers_nadam
4) Model 4: base_nadam
5) Model 5: tanh_tanh_tanh
6) Model 6: extra_2_layers_nadam
7) Model 7: base
8) Model 8: leakyRelu_nadam
9) Model 9: relu_relu_sigmoid_nadam
10) Model 10: relu_relu_sigmoid
11) Model 11: leakyRelu_nadam_point_4
12) Model 12: all_leakyRelu_nada_point_5
13) Model 13: hard_sigmoid_nadam
14) Model 14: tanh_tanh_tanh_nadam

REFERENCES

[1] S. Gulwani, "Programming by examples: Applications, algorithms, and ambiguity resolution," in International Joint Conference on Automated Reasoning. Springer, 2016, pp. 9-14.
[2] M. Balog, A. L. Gaunt, M. Brockschmidt, S. Nowozin, and D. Tarlow, "Deepcoder: Learning to write programs," arXiv preprint arXiv:1611.01989, 2016.
[3] N. Polikarpova, I. Kuraj, and A. Solar-Lezama, "Program synthesis from polymorphic refinement types," in ACM SIGPLAN Notices, vol. 51, no. 6. ACM, 2016, pp. 522-538.
[4] S. Gulwani, O. Polozov, R. Singh et al., "Program synthesis," Foundations and Trends in Programming Languages, vol. 4, no. 1-2, pp. 1-119, 2017.
[5] R. Alur, A. Radhakrishna, and A. Udupa, "Scaling enumerative program synthesis via divide and conquer," in International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2017, pp. 319-336.
[6] N. Yaghmazadeh, Y. Wang, I. Dillig, and T. Dillig, "Sqlizer: Query synthesis from natural language," Proceedings of the ACM on Programming Languages, vol. 1, no. OOPSLA, p. 63, 2017.
[7] S. Mechtaev, J. Yi, and A. Roychoudhury, "Directfix: Looking for simple program repairs," in Proceedings of the 37th International Conference on Software Engineering-Volume 1. IEEE Press, 2015, pp. 448-458.
[8] S. Jha, S. Gulwani, S. A. Seshia, and A. Tiwari, "Oracle-guided component-based program synthesis," in Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering-Volume 1. ACM, 2010, pp. 215-224.
[9] B. Karlik and A. V. Olgac, "Performance analysis of various activation functions in generalized MLP architectures of neural networks," International Journal of Artificial Intelligence and Expert Systems, vol. 1, no. 4, pp. 111-122, 2011.
[10] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical evaluation of rectified activations in convolutional network," arXiv preprint arXiv:1505.00853, 2015.
[11] J. Jagannath, N. Polosky, D. O'Connor, L. N. Theagarajan, B. Sheaffer, S. Foulke, and P. K. Varshney, "Artificial neural network based automatic modulation classification over a software defined radio testbed," in 2018 IEEE International Conference on Communications (ICC). IEEE, 2018, pp. 1-6.
[12] S. Gulwani, "Automating string processing in spreadsheets using input-output examples," in ACM SIGPLAN Notices, vol. 46, no. 1. ACM, 2011, pp. 317-330.
[13] Y. Feng, R. Martins, O. Bastani, and I. Dillig, "Program synthesis using conflict-driven learning," in ACM SIGPLAN Notices, vol. 53, no. 4. ACM, 2018, pp. 420-435.
[14] A. Abate, C. David, P. Kesseli, D. Kroening, and E. Polgreen, "Counterexample guided inductive synthesis modulo theories," in International Conference on Computer Aided Verification. Springer, 2018, pp. 270-288.
[15] N. Polikarpova, "CSE 291: Program synthesis." [Online]. Available: https://github.com/nadia-polikarpova/cse291-program-synthesis/wiki/
