Giovanni Adorni · Stefano Cagnoni · Marco Gori · Marco Maratea (Eds.)

AI*IA 2016: Advances in Artificial Intelligence
XVth International Conference of the Italian Association for Artificial Intelligence
Genova, Italy, November 29 – December 1, 2016, Proceedings

Lecture Notes in Artificial Intelligence (LNAI) 10037
Editors
Giovanni Adorni, University of Genoa, Genova, Italy
Marco Gori, University of Siena, Siena, Italy
Stefano Cagnoni, University of Parma, Parma, Italy
Marco Maratea, University of Genoa, Genova, Italy
This volume collects the contributions presented at the XV Conference of the Italian
Association for Artificial Intelligence (AI*IA 2016). The conference was held in
Genova, Italy, from November 28 to December 1, 2016. The conference is organized
by AI*IA (the Italian Association for Artificial Intelligence), and it is held annually.
The conference received 53 submissions. Each paper was carefully reviewed by at
least three members of the Program Committee, and finally 39 papers were accepted for
publication in these proceedings.
Following the 2013 and 2015 editions of the conference, we adopted a "social"
format for the presentations: the papers were made available to conference participants
in advance. Each paper was briefly presented at the conference and then assigned a time
slot and a reserved table where the authors were available to discuss their work
with the interested audience. The aim of this format is to foster discussion and to
facilitate the exchange of ideas, community creation, and collaboration.
AI*IA 2016 featured exciting keynotes by Pietro Leo, Executive Architect, IBM
Italy, CTO for Big Data Analytics; Giorgio Metta, Vice Scientific Director of IIT; and
Dan Roth, University of Illinois at Urbana-Champaign.
The conference program also included seven workshops: the Second Italian
Workshop on Artificial Intelligence for Ambient Assisted Living (AAL 2016), the
Third Workshop on Artificial Intelligence and Robotics (AIRO 2016), the 10th Italian
Workshop on Artificial Intelligence for Cultural Heritage (AI*CH 2016), the 5th Italian
Workshop on Machine Learning and Data Mining (MLDM.it 2016), the 23rd RCRA
International Workshop on Experimental Evaluation of Algorithms for Solving Prob-
lems with Combinatorial Explosion (RCRA 2016), and the First Italian Workshop on
Deep Understanding and Reasoning: A Challenge for Next-Generation Intelligent
Agents (URANIA 2016), plus a Doctoral Consortium.
The chairs wish to thank the Program Committee members and the reviewers for
their careful work in selecting the best papers, the chairs of the workshops and of the
Doctoral Consortium for organizing the corresponding events, as well as Angelo
Ferrando, Frosina Koceva, and Laura Pandolfo for their help during the organization
of the conference.
AI*IA 2016 was organized by AI*IA (in Italian, Associazione Italiana per l’Intelli-
genza Artificiale), in cooperation with the Department of Informatics, Bioengineering,
Robotics and Systems Engineering and the Polytechnic School of the University of
Genoa (Italy).
Executive Committee
General Chair
Giovanni Adorni Università di Genova, Italy
Program Chairs
Stefano Cagnoni Università di Parma, Italy
Marco Gori Università di Siena, Italy
Marco Maratea Università di Genova, Italy
Local Chairs
Alessio Merlo Università di Genova, Italy
Simone Torsani Università di Genova, Italy
Program Committee
Matteo Baldoni Università di Torino, Italy
Stefania Bandini Università Milano-Bicocca, Italy
Roberto Basili Università di Roma Tor Vergata, Italy
Nicola Basilico Università di Milano, Italy
Federico Bergenti Università di Parma, Italy
Stefano Bistarelli Università di Perugia, Italy
Luciana Bordoni ENEA, Italy
Francesco Buccafurri Università Mediterranea di Reggio Calabria, Italy
Amedeo Cappelli CNR, Italy
Luigia Carlucci Aiello Sapienza Università di Roma, Italy
Amedeo Cesta CNR, Italy
Antonio Chella Università di Palermo, Italy
Carlo Combi Università di Verona, Italy
Gabriella Cortellessa CNR, Italy
Stefania Costantini Università dell’Aquila, Italy
Additional Reviewers
Sponsoring Institutions
AI*IA 2016 was partially funded by the Artificial Intelligence journal, by the
Department of Informatics, Bioengineering, Robotics and Systems Engineering, and
the Polytechnic School, by the Istituto Nazionale di Alta Matematica “F. Severi” —
Gruppo Nazionale per il Calcolo Scientifico, by Camlin Italy, by the city of Genoa, and
by the Italian Association for Artificial Intelligence.
Contents
Multi-agent Systems
Machine Learning
Formal Verification
Understanding Characteristics of Evolved Instances
J. Bossek and H. Trautmann

1 Introduction
In the Traveling Salesperson Problem (TSP) we aim to find a minimal-cost
roundtrip tour in an edge-weighted graph which visits each node exactly once
and returns to the starting node. A plethora of algorithmic approaches to this
famous NP-hard combinatorial problem has been developed over the past decades.
Inexact solvers for the TSP are known to produce high-quality solutions in
reasonable time compared to exact solvers such as Concorde [1]. Recently, the
EAX solver [2] was shown to be competitive with the well-known state-of-the-art
LKH algorithm [3]; more specifically, the respective restart variants LKH+restart
and EAX+restart even improve on the original versions [4] on the Euclidean TSP.
However, there is no single inexact solver which performs best on all possible
problem instances regarding solution quality. In this work, we investigate
performance differences of the current state-of-the-art TSP solvers on
specifically evolved instances.
Efficient algorithm selection approaches [5] in this field are conducted in a
feature- and instance-based fashion. TSP features, e.g., those in [6-9] (these
feature sets are available in the R-package salesperson [10]), are computed on
benchmark instances and related to algorithm performance, allowing for
performance prediction and per-instance algorithm selection.
P_{(A,B)}(I) = P_A(I) / P_B(I)
on a specific instance I, where P_A(I) and P_B(I) are the performances of
solvers A and B on instance I. Solver performance in our scenario is determined
either by the standard indicator penalized average runtime or by the penalized
median runtime. The former repeatedly measures the runtime of the solver on an
instance until the optimal tour (pre-computed by Concorde) has been found and
subsequently computes the arithmetic mean. In case the cutoff time timeLimit is
reached, ten times the cutoff time is used as a penalty in further computations.
However, inside the EA, the actual cutoff time is used, ensuring that the
probability of removing such a solution at later algorithm stages is not
unreasonably low. The evaluation at the final generation uses the classical
par10 score with the full penalty. The median score is much less sensitive to
outliers, and a maximal ratio in medians is much harder to obtain.
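For illustration, the following minimal sketch shows how such penalized scores and the resulting performance ratio could be computed from measured runtimes; the function and variable names are illustrative and not taken from the authors' implementation.

import numpy as np

def penalized_scores(runtimes, time_limit, penalty_factor=10):
    """Replace timed-out runs by penalty_factor * time_limit and return the
    penalized mean (par10-style score) and the penalized median."""
    runtimes = np.asarray(runtimes, dtype=float)
    penalized = np.where(runtimes >= time_limit,
                         penalty_factor * time_limit, runtimes)
    return penalized.mean(), np.median(penalized)

def performance_ratio(runtimes_a, runtimes_b, time_limit, use_median=False):
    """Ratio P_(A,B)(I) = P_A(I) / P_B(I) on a single instance I."""
    mean_a, med_a = penalized_scores(runtimes_a, time_limit)
    mean_b, med_b = penalized_scores(runtimes_b, time_limit)
    return med_a / med_b if use_median else mean_a / mean_b

# Example: four repetitions per solver, cutoff of 120 seconds (one timeout for A).
ratio = performance_ratio([5.2, 7.1, 120.0, 6.3], [1.0, 1.2, 0.9, 1.1], 120.0)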
Algorithm 1. Evolving EA
function EA(fitnessFun, popSize, instSize, generations, timeLimit, cells, rnd = true)
    poolSize = popSize / 2
    population = generateRandomInstances(popSize, instSize) in [0, 1]^2
    while stopping condition not met do
        for i = 1 → popSize do
            fitness[i] = fitnessFun(population[i])
        end for
        matingPool = createMatingPool()                 // 2-tournament selection
        offspring[1] = getBestFromCurrentPopulation()   // 1-elitism
        for i = 2 → popSize do
            choose p1 and p2 randomly from the mating pool
            offspring[i] = applyVariations(p1, p2)
            rescale offspring[i] to [0, 1]^2 by dimension
            if rnd then
                round each point to the nearest grid cell
            end if
        end for
        population = offspring
    end while
end function
The initial population of size popSize is randomly generated in [0, 1]^2 for
instances of size instSize and the performance ratio is computed. Distances are
scaled by multiplying with a factor of 100 and afterwards rounded to the nearest
integer. This step is necessary since EAX expects integer distances. The EA is
then run for a fixed number of generations and the evolutionary loop is executed
as follows: the mating pool is formed by 2-tournament selection, supplemented by
the best solution of the current population (1-elitism). Two different mutation
operators are applied to each combination of randomly drawn instance pairs from
the mating pool. Uniform mutation, which replaces the coordinates of selected
nodes with new randomly chosen coordinates, is applied with a very low
probability, possibly followed by Gaussian mutation, which adds normal noise to
the selected point coordinates. Therefore, global as well as local changes can
come into effect. In the current version the EA does not use any recombination
operator. A single EA generation ends after rescaling the instance to completely
cover [0, 1]^2 and, if rnd = true, rounding the points to the nearest grid cell.
The latter relates to structures that are relevant in practice, such as the
structural design of circuit boards.
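A minimal sketch of the described variation step is given below, assuming NumPy; the rate and grid parameters mirror those reported in Sect. 3.1, but the concrete operator details are simplified and the function name is illustrative.

import numpy as np

def mutate_instance(parent, unif_rate=0.05, normal_rate=0.1, normal_sd=0.025,
                    cells=100, rnd=True, rng=None):
    """Mutate one parent instance (an n x 2 array of city coordinates in [0, 1]^2).

    Sketch only: uniform mutation replaces selected points by fresh random
    coordinates, Gaussian mutation adds normal noise to selected points, then
    the instance is rescaled to cover [0, 1]^2 and optionally snapped to a
    cells x cells grid. No recombination is used, as in the paper.
    """
    rng = rng or np.random.default_rng()
    child = parent.copy()
    # Uniform mutation: replace selected points by new random coordinates.
    mask = rng.random(len(child)) < unif_rate
    child[mask] = rng.random((mask.sum(), 2))
    # Gaussian mutation: add normal noise to selected points.
    mask = rng.random(len(child)) < normal_rate
    child[mask] += rng.normal(0.0, normal_sd, size=(mask.sum(), 2))
    # Rescale each dimension so the instance covers [0, 1]^2 completely.
    child = (child - child.min(axis=0)) / (child.max(axis=0) - child.min(axis=0))
    if rnd:
        # Round each point to the nearest cell of a regular grid.
        child = np.round(child * cells) / cells
    return child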
3 Experiments
3.1 Experimental Setup
In total 200 TSP instances were evolved. For all four considered optimization
directions, i.e., P_{(LKH,EAX)}, P_{(EAX,LKH)}, P_{(LKH+restart,EAX+restart)}
and P_{(EAX+restart,LKH+restart)}, 25 instances each were generated with
activated and deactivated rounding, respectively. Based on preliminary
experiments and the experimental results of [8,12] the EA parameters were set as
follows: timeLimit = 120, popSize = 30, generations = 5000,
uniformMutationRate = 0.05, normalMutationRate = 0.1, normalMutationSD = 0.025
and cells = 100. We used the reference implementation LKH 2.0.7 based on the
former implementation 1.3 [14], the original EAX implementation as well as the
specific restart variants as described in [4]. The solvers were evaluated
repeatedly, three times inside the EA due to a limited computational budget but
ten times for the final evaluations. As described in Sect. 2, either the par10
score or the median score was computed for the final instances.
For comparison and practical validation, performance ratios of the respective
solvers on TSPLIB instances of comparable size, i.e., 200 ≤ instSize ≤ 400
(the concrete instances are listed below Fig. 1), were computed for both kinds
of performance measures. Moreover, 100 random instances in [0, 1]^2 were
generated, while the same rounding strategy for the distance matrix was applied
as used inside the EA for the evolved instances. All experiments were run on the
parallel Linux computer cluster PALMA at the University of Münster, consisting
of 3528 processor cores in total. The utilized compute nodes are 2.6 GHz
machines with two hexacore Intel Westmere processors, i.e., 12 cores per node,
and 2 GB of main memory per core.
Fig. 1. Average (left) and median (right) par10 scores (log-scale) of LKH+restart and
EAX+restart on evolved, random and TSPLIB instances. A specific symbol visualizes
whether instances were rounded to a grid structure (rnd) or not (nrnd).
TSPLIB instances used: a280, gil262, kroA200, kroB200, lin318, pr226, pr264, pr299, rd400, ts225, tsp225.
3.2 Results
Figures 1 and 2 give an overview of the performance scores of the considered
solver pairs, i.e., both for the original as well as for the restart variants.
Evolved instances are visualized together with random and TSPLIB instances.
Fig. 2. Average (left) and median (right) par10 scores (log-scale) of LKH and EAX on
evolved, random and TSPLIB instances. A specific symbol visualizes whether instances
were rounded to a grid structure (rnd) or not (nrnd).
Fig. 3. Mean (left) and median par10 scores (right) of the four solvers depending on
rounding (rnd) and type of optimization (log-scale).
Evaluating with median scores shows that especially the pairing (LKH,EAX),
i.e. generating easier instances for LKH, does not show the desired performance
effects. Figure 4 explicitly provides boxplots of the performance ratios P(A,B) and
thus summarizes all effects previously listed, in particular the huge differences
in performance ratios compared to random and TSPLIB instances.
Fig. 4. Performance ratios based on mean par10 scores of the four considered algo-
rithms on all considered instance sets.
identified as the crucial feature separating both instance classes. The results
coincide with the results of [8], where the mean distance of the minimum
spanning tree was identified as a separating feature between easy and hard
instances for TSP approximation algorithms. This result is promising with
respect to future work in the algorithm selection context: the computation of
minimum spanning tree characteristics is a computationally cheap task, and we
strive for cheap features, since spending a lot of runtime on feature
computation before actually solving the TSP itself is pointless.
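As an illustration, such minimum spanning tree statistics can be computed cheaply from the city coordinates, e.g., with SciPy; this is only a sketch of the kind of feature meant here, not the exact feature definition used in the cited feature sets.

import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_distance_features(coords):
    """Mean and standard deviation of the edge lengths in the minimum spanning
    tree of a TSP instance given as an (n x 2) array of coordinates."""
    dists = distance_matrix(coords, coords)
    mst = minimum_spanning_tree(dists)     # sparse matrix holding the n-1 MST edges
    edge_lengths = mst.data
    return {"mst_dist_mean": edge_lengths.mean(),
            "mst_dist_sd": edge_lengths.std()}

# Example on a random instance in [0, 1]^2
features = mst_distance_features(np.random.rand(200, 2))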
The same analysis was conducted for the original solver variants. However, as
the evolved instances are much denser in the lower right and upper left corners
of Fig. 2 than in the restart case, we only selected the respective five most
extreme instances. In this case, different features play a key role in
explaining the solver performance differences, including nearest-neighbor based
features, as visualized in Fig. 5. Again, the median misclassification error
vanishes, while the mean misclassification error is 0.2, i.e., only two out of
the ten instances are misclassified.
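The variable importance analysis referred to in Fig. 5 can be reproduced in spirit with a random forest classifier; the snippet below is a generic sketch in which the feature names and data are placeholders, not the authors' exact setup.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: rows = extreme instances, columns = TSP features
# (e.g., MST and nearest-neighbor distance statistics); labels = easier solver.
X = np.random.rand(10, 5)
feature_names = ["mst_dist_mean", "mst_dist_sd", "nn_dist_mean",
                 "nn_dist_sd", "angle_mean"]
y = np.array([0] * 5 + [1] * 5)   # 0: easier for LKH, 1: easier for EAX

forest = RandomForestClassifier(n_estimators=500, random_state=1).fit(X, y)
for name, importance in sorted(zip(feature_names, forest.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")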
Fig. 5. Variable Importance Plot of Random Forest distinguishing the (left): ten most
extreme instances w.r.t. performance ratio for the restart variants, (right): five most
extreme instances w.r.t. performance ratio for the original algorithm variants.
solvers even further compared to the resulting median evaluation on the instances
originally generated inside the EA using mean par10 scores. The same is true for
comparing the mean par10 scores on both scenarios (see Fig. 7). However, slight
improvements are visible in case easier instances for LKH+restart are evolved.
Most probably the median alone does not provide enough differentiation between
varying solver results over the repetitions.
However, in our view an adequate combination of mean and median scores
inside the EA fitness function is promising in order to get deeper insights into
solver variance on the considered instances. We will investigate this issue in
further studies together with increasing the number of solver repetitions along
the evolutionary loop.
Fig. 7. Comparison of mean and median par10 ratios (log-scale) of instance sets evolved
for LKH+restart and EAX+restart either by using the mean par10 or the median score
inside the EA as fitness function.
4 Conclusions
This work focuses on the two current state-of-the-art inexact TSP solvers LKH
and EAX together with their respective restart variants. In order to increase
the understanding of the performance differences between both solvers, a
sophisticated evolutionary algorithm was used to evolve instances which lead to
a maximal performance difference between both solvers. Both directions are
analyzed, i.e., we generated instances which are easier for solver A but much
harder for solver B, as well as the opposite case. In this regard we observed
substantial differences in solver performance ratios on the evolved instances
compared to random or TSPLIB instances. By a feature-based analysis of the most
extreme instances in terms of performance ratio, crucial features are identified
for both solver pairings which are indicated to have an influence on
solver-specific problem difficulty. Moreover, we contrasted the classical mean
par10 score with a respective median version to further increase the challenge
of evolving instances with high solver performance differences.
Future studies will focus on generalizing the results to larger instance sizes
and on designing a more sophisticated fitness function inside the EA to further
increase the solver performance differences on the evolved instances.
References
1. Applegate, D.L., Bixby, R.E., Chvatal, V., Cook, W.J.: The Traveling Salesman
Problem: A Computational Study. Princeton University Press, Princeton (2007)
2. Nagata, Y., Kobayashi, S.: A powerful genetic algorithm using edge assembly
crossover for the traveling salesman problem. INFORMS J. Comput. 25, 346–363
(2013)
3. Helsgaun, K.: General k-opt submoves for the Lin-Kernighan TSP heuristic. Math.
Program. Comput. 1, 119–163 (2009)
4. Kotthoff, L., Kerschke, P., Hoos, H., Trautmann, H.: Improving the state of the
art in inexact TSP solving using per-instance algorithm selection. In: Dhaenens,
C., Jourdan, L., Marmion, M.-E. (eds.) LION 2015. LNCS, vol. 8994, pp. 202–217.
Springer, Heidelberg (2015). doi:10.1007/978-3-319-19084-6_18
5. Kotthoff, L.: Algorithm selection for combinatorial search problems: a survey. AI
Mag. 35, 48–60 (2014)
6. Hutter, F., Xu, L., Hoos, H.H., Leyton-Brown, K.: Algorithm runtime prediction:
methods & evaluation. Artif. Intell. 206, 79–111 (2014)
7. Smith-Miles, K., van Hemert, J.: Discovering the suitability of optimisation algo-
rithms by learning from evolved instances. Ann. Math. Artif. Intell. 61, 87–104
(2011)
8. Mersmann, O., Bischl, B., Trautmann, H., Wagner, M., Bossek, J., Neumann,
F.: A novel feature-based approach to characterize algorithm performance for the
traveling salesperson problem. Ann. Math. Artif. Intell. 69, 1–32 (2013)
9. Pihera, J., Musliu, N.: Application of machine learning to algorithm selection for
TSP. In: Fogel, D., et al. (eds.) Proceedings of the IEEE 26th International Con-
ference on Tools with Artificial Intelligence (ICTAI). IEEE press (2014)
10. Bossek, J.: salesperson: Computation of Instance Feature Sets and R Interface
to the State-of-the-Art Solvers for the Traveling Salesperson Problem. R package
version 1.0 (2015)
11. Fischer, T., Stützle, T., Hoos, H.H., Merz, P.: An analysis of the hardness of
TSP instances for two high-performance algorithms. In: Proceedings of the 6th
Metaheuristics International Conference, Vienna, Austria, pp. 361–367 (2005)
12. Mersmann, O., Bischl, B., Bossek, J., Trautmann, H., Wagner, M., Neumann, F.:
Local search and the traveling salesman problem: a feature-based characterization
of problem hardness. In: Hamadi, Y., Schoenauer, M. (eds.) LION 6. LNCS, vol.
7219, pp. 115–129. Springer, Heidelberg (2012)
13. Nallaperuma, S., Wagner, M., Neumann, F., Bischl, B., Mersmann, O., Trautmann,
H.: A feature-based comparison of local search and the Christofides algorithm for
the travelling salesperson problem. In: Foundations of Genetic Algorithms (FOGA)
(2013) (accepted)
14. Lacoste, J.D., Hoos, H.H., Stützle, T.: On the empirical time complexity of state-
of-the-art inexact TSP solvers. Optimization Letters (to appear)
15. Kerschke, P., Dagefoerde, J.: flacco: Feature-Based Landscape Analysis of Contin-
uous and Constraint Optimization Problems. R package version 1.1 (2015)
On-line Dynamic Station Redeployments in Bike-Sharing Systems
Carlo Manna
Insight Research Centre for Data Analytics, University College Cork, Cork, Ireland
[email protected]
1 Introduction
Bike-sharing systems (BSS) are in place in several cities in the world, and are
an increasingly important support for multi-modal transport systems [1]. BSS
are widely adopted with 747 active systems, a fleet of over 772,000 bicycles
and 235 systems in planning or under construction [2]. A BSS typically has a
number of base stations scattered throughout a city. At the beginning of the
day, each station is stocked with a pre-determined number of bikes. Users with a
membership card can pick up and return bikes from any designated station, each
of which has a finite number of docks. At the end of the work day, trucks are used
to move bikes around so as to return to some pre-determined configuration at the
beginning of the day. Due to the individual movement of customers according to
their needs, there is often congestion (more than required) or starvation (fewer
than required) of bikes at certain base stations. According to CapitalBikeShare
Company [3], in a city like Washington, at a minimum, there are around 100
cases of empty stations and 100 cases of full stations per day, and at a maximum
there are about 750 cases of empty stations and 330 cases of full stations per
day. As demonstrated in [4], this can result in a significant loss of customer
activity. Such a loss in demand can have two undesirable outcomes: (a) loss in
revenue; (b) an increase in carbon emissions, as people resort to less
environmentally friendly modes of transport. To address this problem, the
majority of the proposed solutions aim to find more efficient methods for
dynamically rebalancing the number of bicycles at the base stations, taking into
account the uncertainty of the customer demand or predicting the customer demand
at each station [5,6]. Basically, a number of bicycles are transferred by truck
from one station to another to meet the upcoming demand. This operation takes
place once a day (or more often in some specific situations).
The aim of this paper is to study a totally different approach, in which the
fixed base stations are augmented by a number of movable stations (transported
by trucks) with a dual purpose: (1) dynamically adding/re-allocating dock
stations in city areas to match customer demand in real time and (2) dynamically
re-balancing the number of bicycles at particular fixed stations where the
redeployment of a docking station is unnecessary.
In particular, we consider a problem in which the day is partitioned into time
intervals of equal length. We suppose that for each of those time periods the
probability distributions of the travel demands between different locations are
known. At the start of the day, we compute the best possible locations for the
dock stations in each time period, taking into account the stochastic nature of
the demand, with the aim of maximizing the number of customers served. This is
an on-line problem: although the solution is a sequence of decisions for each
time period, only the immediate decision is actually taken (i.e., the station
allocations for the incoming time period). During this time period, a new
computation is performed using updated travel demand predictions, and a new
decision for the next period is taken from the re-computed decision sequence.
This carries on until the end of the time horizon.
The main advantage of such a system is that the bike station configuration is
not fixed, but can change adaptively with the travel demand from day to day and,
within each day, it can change over a number of time periods, while respecting
specific time constraints (i.e., a new configuration must be computed in
advance, with enough time to allow the repositioning of the stations). Finally,
updating the decisions at each time period reduces the uncertainty in the
predictions, because the majority of travel demand prediction techniques are
based on auto-regressive models, which make use of the most recent known data to
predict future outcomes [7].
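The on-line scheme just described is essentially a rolling-horizon loop; the sketch below illustrates it with hypothetical helper functions (update_demand_forecast, optimize_station_locations and reposition_stations are placeholders, not part of the paper).

def rolling_horizon_control(horizon, initial_forecast,
                            update_demand_forecast,
                            optimize_station_locations,
                            reposition_stations):
    """On-line dynamic redeployment: re-optimize the full decision sequence
    at every period but commit only the decision for the next period."""
    forecast = initial_forecast
    for t in range(horizon):
        # Re-compute the best station locations for periods t .. horizon-1
        # under the current (stochastic) demand forecast.
        decisions = optimize_station_locations(forecast, start_period=t,
                                                end_period=horizon)
        # Only the immediate decision is executed.
        reposition_stations(decisions[t])
        # Observed data from period t refreshes the forecast
        # (e.g., via an auto-regressive demand model).
        forecast = update_demand_forecast(forecast, observed_period=t)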
The key distinction from existing research on bike sharing is that we consider
the dynamic redeployment of bicycle stations (instead of just rebalancing the
number of bicycles in the existing stations). This approach does not exclude the
possibility of rebalancing, as in the existing research; rather, it extends that
research towards a novel approach to BSS in which it is possible to dynamically
change the configuration of the dock stations in real time in order to maximize
the potential customer demand served. While numerous demand models exist in the
literature, in this paper the potential customer demand is based on a primary
concept: distance decay, that is, "the attenuation of a pattern or a process
with distance" [8]. In other words, people are less willing to use a facility
located too far from them. This is a focal concept in a variety of modelling
contexts, such as transportation, migration, and location theory [9].
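For illustration, a common way to model such decay is an exponential attenuation with walking distance; the exact functional form used in the paper is not specified here, so the snippet below is only an assumed example.

import math

def covered_demand(potential_demand, walk_distance, delta):
    """Attenuate potential demand by distance decay: delta in [0, 1] controls
    how quickly the willingness to walk drops with distance (exponential form
    assumed here purely for illustration)."""
    return potential_demand * math.exp(-delta * walk_distance)

# Example: 40 potential trips, 600 m of total walking, delta = 0.002 per metre
print(covered_demand(40, 600, 0.002))   # roughly 12 expected trips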
Specifically, our key contributions are as follows:
2 Related Work
Although bike sharing systems are relatively new, they have been studied exten-
sively in the literature. For this reason, we only focus on threads of research that
are of relevance to this paper. However, on the best of our knowledge, there is
no any previous work for on-line stochastic redeployment of moveable stations
in bike-sharing.
The first thread of research focus on the bicycles rebalancing between the
stations. Particularly, [12–14] focus on the problem of finding routes at the end
of the day for a set of carriers to achieve the desired configuration of bikes
across the base stations. They have provided scalable exact and approximate
approaches to this problem by either abstracting base stations into mega sta-
tions or by employing insights from inventory management or by using variable
neighbourhood search based heuristics. Those works assume there is only one
fixed redeployment of bikes that happens at the end of the day. In contrast,
[15] predict the stochastic demand from user trip data of Singapore metro sys-
tem using poisson distribution and provide an optimization model that suggests
the best location of the stations and a dynamic bicycles redeployment for the
model to minimize the number of unsatisfied customers. However, they assume
that redeployment of bikes from one station to another is always possible with-
out considering the routing of carriers, which is a major cost driver for the
bike-sharing company. In [16] they overcome this problem, developing a mixed
integer programming formulation which includes the carrier routes in the
optimization model. Finally, other relevant works have been proposed in [17] to
deal with unmet demand during rush hours. They provide a myopic redeployment
policy based on the current demand, employing Dantzig-Wolfe and Benders
decomposition techniques to make the decision problem faster. [18] also provides
myopic online decisions based on an assessment of the demand for the next 30
minutes.
The second thread of research, which is complementary to the work presented in
this paper, concerns demand prediction and analysis. [19] provides a service
level analysis of a BSS using dual-bounded joint-chance constraints, where the
near-future demands are predicted for a short period of time. Finally, in [20]
the BSS is represented as a dual Markovian waiting system to predict the actual
demand.
As we have already highlighted, all the aforementioned works differ from the one
proposed in this paper, since we consider a dynamic re-allocation of a certain
number of bicycle stations during the day. This leads to a formulation of the
problem which is different from all those previously provided in the literature.
3 Problem Description
In this section we formally describe the bike-sharing system with dynamic
redeployment. It is compactly described using the tuple
⟨T, A, K, S, P, D, X, X̂, δ, l⟩, where A represents the set of areas for which
the demand has to be covered, K represents the set of possible locations for the
dock stations, with K ⊆ A, and T is the time horizon. S is a binary vector
representing the totality of decisions on the allocated stations, where S_k^t
denotes the decision on whether or not to enable a dock station in k ∈ K at time
t ∈ T. Furthermore, P is a binary vector representing the distribution of the
service coverage between areas and dock stations. In particular, P_{a,k}^t
denotes whether or not the area a ∈ A is served by the station in k ∈ K at time
t ∈ T. D is a vector of the distances (or travel times) between all areas and
station locations, with d_{a,a'} (d_{a,k}) the distance/travel time between the
areas a and a' (between the area a and the station in k). X is a vector of the
probability distributions of the potential customer demand between different
areas, with x_{a,a'}^t denoting the potential travel demand from the area a to
the area a' (with a, a' ∈ A) at time t ∈ T. The potential customer demand at
time t denotes the maximum number of possible customers to be served; it is
expressed in the form of a probability distribution. Similarly, X̂ is a vector of
the expected covered customer demand between different areas, with x̂_{a,a'}^t
denoting the expected travel demand from the area a to the area a' (with
a, a' ∈ A) at time t ∈ T. δ ∈ [0, 1] is a distance decay parameter [8], through
which we take into account the aforementioned distance decay concept. Finally,
l is the maximum number of moveable stations. The expected covered customer
demand denotes the number of customers who are expected to use the bike-sharing
system. Hence, given a potential demand x_{a,a'}^t, it holds that
x̂_{a,a'}^t ≤ x_{a,a'}^t for each a, a' ∈ A and t ∈ T. X̂ depends on various
factors, such as the total distance/travel time between the customer location
and the dock station, and on the distance decay parameter δ.
Given the potential customer demand X instantiated from a known probability
distribution, at each time step t ∈ T the goal is to maximize a profit function
W(·) of the overall expected covered demand X̂ over the total time horizon T,
subject to some problem-specific constraints C(·) (both are specified in the
following paragraph). This is achieved by finding the best possible sequence of
decisions S^1, ..., S^T concerning the locations of the dock stations over the
entire time horizon T.
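Under the notation above, one plausible way to write this objective (a sketch based on the definitions given here, not necessarily the exact model W(·), C(·) used in the paper) is:

\max_{S^1,\dots,S^T,\,P} \; W(\hat{X}) \;=\; \sum_{t \in T} \sum_{a,\,a' \in A} \hat{x}^{t}_{a,a'}
\qquad \text{s.t.} \qquad \sum_{k \in K} P^{t}_{a,k} = 1 \quad \forall\, a \in A,\ t \in T,

together with the requirement that at most l moveable stations are enabled in each period, while C(·) collects any further problem-specific constraints.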
Finally, as reported in the relevant literature [21], in defining the solution
approach we have made the following assumptions:
1. The expected covered demand decreases with the total distance travelled on
foot (i.e., the distance between the starting point and the pick-up station, and
the distance between the return station and the arrival point), given a
predefined decay parameter δ.
2. Each area can be served by only one station.
Assumption (1) states that the distance travelled by bicycle between the
stations does not have any negative impact on the demand, because the potential
demand X is already estimated considering this factor. Conversely, the distance
travelled on foot does have a negative impact on the service usage. Assumption
(2) states that the totality of the demand in an area a ∈ A, considering a as
either a starting or an arrival point, can be served by only one station located
at k ∈ K. This assumption is one of the most commonly used in facility location
models, which assume that customers always use the closest facility.
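A minimal sketch of how, under assumptions (1) and (2), the expected covered demand could be derived from the potential demand: each area is assigned to its closest enabled station and the demand is attenuated by the total walking distance (the exponential attenuation form and the helper names are assumptions for illustration only).

import math

def expected_covered_demand(potential_demand, walk_dist, enabled_stations, delta):
    """potential_demand: dict (a, a2) -> x^t_{a,a2}; walk_dist: dict (a, k) -> distance;
    enabled_stations: set of enabled station locations k at time t; delta: decay in [0, 1]."""
    def closest(a):
        # Assumption (2): each area is served only by its closest enabled station.
        return min(enabled_stations, key=lambda k: walk_dist[(a, k)])

    covered = {}
    for (a, a2), demand in potential_demand.items():
        # Assumption (1): only the walking distance (origin -> pick-up station
        # plus return station -> destination) attenuates the demand.
        total_walk = walk_dist[(a, closest(a))] + walk_dist[(a2, closest(a2))]
        covered[(a, a2)] = demand * math.exp(-delta * total_walk)
    return covered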
4 Solution Approach