Authors:
Arlena Wellßow 1,2; Torben Logemann 1 and Eric MSP Veith 1,2

Affiliations:
1 Adversarial Resilience Learning, Carl von Ossietzky University Oldenburg, Oldenburg, Germany
2 OFFIS – Institute for Information Technology, Oldenburg, Germany
Keyword(s):
Modeling, Offline Learning, Simulation, Expert Knowledge.
Abstract:
Reinforcement learning has shown its worth in multiple fields (e.g., voltage control, market participation). However, training each agent from scratch incurs long training times and high computational costs. Including expert knowledge in agents is beneficial: human reasoning can come up with independent solutions for unusual states of the (less complex) system, and, especially in long-established fields, many strategies are already known and therefore learnable for the agent. Using this knowledge allows agents to apply such solutions without having encountered the corresponding situations first. Expert knowledge is usually available only in semi-structured, non-machine-readable forms, such as (mis-)use cases. Also, data containing extreme situations, grid states, or emergencies is usually scarce. However, these situations can be described as scenarios in these semi-structured forms. From such descriptions, a state machine representing the scenarios' steps can be built to generate data, which can then be used for offline learning. This paper presents a model and a prototype for such state machines. We implemented this prototype using state machines as internal policies of agents without a learning component for four specified scenarios. The agents act according to the actuator setpoints the state machines emit, and their outputs control the connected simulation. We found that our prototype can generate data in the context of smart grid simulation. These data samples show the specified behavior, cover the search space through variance, and do not include data that violates given constraints.
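
As a rough illustrative sketch only (not the paper's implementation), a scenario state machine used as an agent's internal, non-learning policy might look as follows. The state names, transition guards, and setpoint ranges here are hypothetical; the key idea is that each state emits a setpoint with bounded random variance, so the generated data covers the search space without violating constraints:

```python
# Illustrative sketch: a scenario state machine as an agent's internal
# (non-learning) policy. State names, transition guards, and setpoint
# ranges are hypothetical, not taken from the paper.
import random
from dataclasses import dataclass


@dataclass
class State:
    name: str
    setpoint_range: tuple[float, float]  # constraint-respecting bounds

    def emit(self) -> float:
        # Bounded random variance covers the search space while
        # staying inside the given constraints.
        low, high = self.setpoint_range
        return random.uniform(low, high)


class ScenarioStateMachine:
    """Steps through a scenario's states and emits actuator setpoints."""

    def __init__(self) -> None:
        self.states = {
            "normal": State("normal", (0.95, 1.05)),
            "overvoltage": State("overvoltage", (0.90, 0.98)),
            "recovery": State("recovery", (0.98, 1.02)),
        }
        self.current = self.states["normal"]

    def step(self, observed_voltage: float) -> float:
        # Hypothetical transition guards, as might be derived from a
        # (mis-)use case description.
        if self.current.name == "normal" and observed_voltage > 1.05:
            self.current = self.states["overvoltage"]
        elif self.current.name == "overvoltage" and observed_voltage <= 1.02:
            self.current = self.states["recovery"]
        elif self.current.name == "recovery":
            self.current = self.states["normal"]
        return self.current.emit()


# Usage: drive a loop and log (observation, setpoint) pairs as a
# dataset for offline learning.
policy = ScenarioStateMachine()
dataset = []
voltage = 1.0
for _ in range(100):
    setpoint = policy.step(voltage)
    dataset.append((voltage, setpoint))
    # A real loop would feed the setpoint into the grid simulation;
    # here a random perturbation stands in for the simulator response.
    voltage = max(0.9, min(1.1, voltage + random.uniform(-0.02, 0.02)))
```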