Search Results (7)

Search Parameters:
Keywords = docker swarm

21 pages, 9769 KiB  
Article
Integrating Machine Learning and MLOps for Wind Energy Forecasting: A Comparative Analysis and Optimization Study on Türkiye’s Wind Data
by Saadin Oyucu and Ahmet Aksöz
Appl. Sci. 2024, 14(9), 3725; https://doi.org/10.3390/app14093725 - 27 Apr 2024
Cited by 1 | Viewed by 1044
Abstract
This study conducted a detailed comparative analysis of various machine learning models to enhance wind energy forecasts, including linear regression, decision tree, random forest, gradient boosting machine, XGBoost, LightGBM, and CatBoost. Furthermore, it developed an end-to-end MLOps pipeline leveraging SCADA data from a wind turbine in Türkiye. This research not only compared models using the RMSE metric for selection and optimization but also explored in detail the impact of integrating machine learning with MLOps on the precision of energy production forecasts. It investigated the suitability and efficiency of ML models in predicting wind energy with MLOps integration. The study explored ways to improve LightGBM performance through hyperparameter tuning and Docker utilization. It also highlighted challenges in speeding up MLOps development and deployment processes. Model performance was assessed using the RMSE metric in a comparative evaluation across the different models. The findings revealed that the RMSE values among the regression models ranged from 192 kW to 460 kW. Focusing on enhancing LightGBM, the research decreased the RMSE value to 190.34 kW. Despite facing technical and operational hurdles, the implementation of MLOps was shown to enhance the speed (latency of 9 ms), reliability (through Docker encapsulation), and scalability (using Docker Swarm) of machine learning endeavors.
(This article belongs to the Special Issue AutoML: Advances and Applications)
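As a rough illustration of the model-selection step the abstract describes (comparing regressors by RMSE, then tuning LightGBM), the sketch below uses scikit-learn and LightGBM on an assumed SCADA export; the file name, column names, and parameter grid are hypothetical, not the authors' actual pipeline.

```python
# Minimal sketch: baseline linear regression vs. a tuned LightGBM model,
# both scored by RMSE. Data layout is an assumption for illustration only.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import RandomizedSearchCV, train_test_split

df = pd.read_csv("scada.csv")                            # assumed SCADA export
X = df[["wind_speed", "wind_direction", "rotor_rpm"]]    # assumed feature columns
y = df["active_power_kw"]                                # assumed target in kW
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

def rmse(model):
    # Root-mean-squared error on the held-out split.
    return float(np.sqrt(mean_squared_error(y_test, model.predict(X_test))))

baseline = LinearRegression().fit(X_train, y_train)

# Small, illustrative hyperparameter search for LightGBM.
search = RandomizedSearchCV(
    LGBMRegressor(),
    param_distributions={"num_leaves": [31, 63, 127],
                         "learning_rate": [0.01, 0.05, 0.1],
                         "n_estimators": [200, 500, 1000]},
    n_iter=10, scoring="neg_root_mean_squared_error", cv=3, random_state=42,
)
search.fit(X_train, y_train)

print(f"Linear regression RMSE: {rmse(baseline):.2f} kW")
print(f"Tuned LightGBM RMSE:    {rmse(search.best_estimator_):.2f} kW")
```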

19 pages, 2803 KiB  
Article
A Comparative Analysis of High Availability for Linux Container Infrastructures
by Marek Šimon, Ladislav Huraj and Nicolas Búčik
Future Internet 2023, 15(8), 253; https://doi.org/10.3390/fi15080253 - 28 Jul 2023
Cited by 1 | Viewed by 2357
Abstract
In the current era of prevailing information technology, the requirement for high availability and reliability of various types of services is critical. This paper focuses on the comparison and analysis of different high-availability solutions for Linux container environments. The objective was to identify the strengths and weaknesses of each solution and to determine the optimal container approach for common use cases. Through a series of structured experiments, basic performance metrics were collected, including average service recovery time, average transfer rate, and total number of failed calls. The container platforms tested included Docker, Kubernetes, and Proxmox. On the basis of a comprehensive evaluation, it can be concluded that Docker with Docker Swarm is generally the most effective high-availability solution for commonly used Linux containers. Nevertheless, there are specific scenarios in which Proxmox stands out, for example, when fast data transfer is a priority or when load balancing is not a critical requirement.
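As one way to picture the recovery-time metric used here, a minimal sketch could poll a test service and time the outage after its node is killed. The health-check URL and timing parameters below are assumptions; this is an illustration, not the benchmark harness used in the paper.

```python
# Hypothetical sketch: measure how long a containerized service stays
# unreachable after its node goes down ("service recovery time").
import time
import requests

URL = "http://cluster.example.local:8080/health"  # assumed test endpoint

def wait_until(reachable: bool, interval: float = 0.1) -> float:
    """Poll the endpoint until it matches the desired state; return that instant."""
    while True:
        try:
            ok = requests.get(URL, timeout=0.5).status_code == 200
        except requests.RequestException:
            ok = False
        if ok == reachable:
            return time.monotonic()
        time.sleep(interval)

input("Kill the node hosting the service, then press Enter...")
down_at = wait_until(reachable=False)   # first failed health check
up_at = wait_until(reachable=True)      # service rescheduled and healthy again
print(f"Recovery time: {up_at - down_at:.2f} s")
```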

17 pages, 2542 KiB  
Article
Leadership Hijacking in Docker Swarm and Its Consequences
by Adi Farshteindiker and Rami Puzis
Entropy 2021, 23(7), 914; https://doi.org/10.3390/e23070914 - 19 Jul 2021
Cited by 4 | Viewed by 3609
Abstract
With the advent of microservice-based software architectures, an increasing number of modern cloud environments and enterprises use operating system level virtualization, which is often referred to as container infrastructures. Docker Swarm is one of the most popular container orchestration infrastructures, providing high availability and fault tolerance. Occasionally, discovered container escape vulnerabilities allow adversaries to execute code on the host operating system and operate within the cloud infrastructure. We show that Docker Swarm is currently not secured against misbehaving manager nodes. This allows a high-impact, high-probability privilege escalation attack, which we refer to as leadership hijacking, the possibility of which is neglected by the current cloud security literature. Cloud lateral movement and defense evasion payloads allow an adversary to leverage the Docker Swarm functionality to control each and every host in the underlying cluster. We demonstrate an end-to-end attack, in which an adversary with access to an application running on the cluster achieves full control of the cluster. To reduce the probability of a successful high-impact attack, container orchestration infrastructures must reduce the trust level of participating nodes and, in particular, incorporate adversary-immune leader election algorithms.
(This article belongs to the Special Issue Swarms and Network Intelligence)
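For context on the cluster state the attack targets: Raft leadership among swarm manager nodes can be observed through the Docker Engine API. The sketch below uses the Docker SDK for Python purely to read that state; it does not reproduce the leadership-hijacking attack.

```python
# Sketch: list swarm manager nodes and show which one currently holds Raft
# leadership. Run against a manager node's Docker Engine API; read-only.
import docker

client = docker.from_env()
for node in client.nodes.list(filters={"role": "manager"}):
    manager = node.attrs.get("ManagerStatus", {})
    hostname = node.attrs["Description"]["Hostname"]
    status = "LEADER" if manager.get("Leader") else manager.get("Reachability", "unknown")
    print(f"{hostname:20s} {manager.get('Addr', '-'):22s} {status}")
```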

19 pages, 982 KiB  
Article
Empirical Performance and Energy Consumption Evaluation of Container Solutions on Resource Constrained IoT Gateways
by Syed M. Raza, Jaeyeop Jeong, Moonseong Kim, Byungseok Kang and Hyunseung Choo
Sensors 2021, 21(4), 1378; https://doi.org/10.3390/s21041378 - 16 Feb 2021
Cited by 8 | Viewed by 2875
Abstract
Containers virtually package a piece of software and share the host Operating System (OS) upon deployment. This makes them notably lightweight and suitable for dynamic service deployment at the network edge and on Internet of Things (IoT) devices for reduced latency and energy consumption. Data collection, computation, and now intelligence are included in a variety of IoT devices, which have very tight latency and energy consumption constraints. Recent studies satisfy the latency condition through containerized service deployment on IoT devices and gateways, but they fail to account for the limited energy and computing resources of these devices, which limit scalability and concurrent service deployment. This paper aims to establish guidelines and identify critical factors for containerized service deployment on resource-constrained IoT devices. For this purpose, two container orchestration tools (i.e., Docker Swarm and Kubernetes) are tested and compared on a baseline IoT gateway testbed. Experiments use Deep Learning driven data analytics and Intrusion Detection System services, and evaluate the time it takes to prepare and deploy a container (creation time), Central Processing Unit (CPU) utilization for concurrent container deployment, memory usage under different traffic loads, and energy consumption. The results indicate that container creation time and memory usage are decisive factors for a containerized microservice architecture.
(This article belongs to the Section Internet of Things)
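A minimal sketch of how the creation-time and memory metrics might be sampled with the Docker SDK for Python is shown below; the test image, repetition count, and the choice to exclude the image pull from the timing are assumptions, not the paper's exact methodology.

```python
# Hypothetical sketch: time container creation and sample memory usage,
# in the spirit of the metrics evaluated in the abstract.
import statistics
import time
import docker

client = docker.from_env()
IMAGE = "alpine:3.19"          # assumed lightweight test image
client.images.pull(IMAGE)      # pull once so timing covers creation only

timings = []
for _ in range(10):
    start = time.monotonic()
    container = client.containers.run(IMAGE, command="sleep 5", detach=True)
    timings.append(time.monotonic() - start)
    stats = container.stats(stream=False)               # one-shot resource snapshot
    mem_mib = stats["memory_stats"].get("usage", 0) / 2**20
    print(f"created in {timings[-1]:.3f} s, memory ~{mem_mib:.1f} MiB")
    container.remove(force=True)

print(f"mean creation time: {statistics.mean(timings):.3f} s")
```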

21 pages, 3453 KiB  
Review
Smart Containers Schedulers for Microservices Provision in Cloud-Fog-IoT Networks. Challenges and Opportunities
by Rocío Pérez de Prado, Sebastián García-Galán, José Enrique Muñoz-Expósito, Adam Marchewka and Nicolás Ruiz-Reyes
Sensors 2020, 20(6), 1714; https://doi.org/10.3390/s20061714 - 19 Mar 2020
Cited by 19 | Viewed by 5957
Abstract
Docker containers are the lightweight virtualization technology prevailing today for the provision of microservices. This work raises and discusses two main challenges in Docker container scheduling in cloud-fog-internet of things (IoT) networks. First, the convenience of integrating intelligent container schedulers based on soft computing into the dominant open-source container management platforms: Docker Swarm, Google Kubernetes, and Apache Mesos. Second, the need for specific intelligent container schedulers for the different interfaces in cloud-fog-IoT networks: cloud-to-fog, fog-to-IoT, and cloud-to-IoT. The goal of this work is to support the optimal allocation of microservices provided by the main cloud service providers today and used by millions of users worldwide in applications such as smart health, content delivery networks, etc. In particular, the improvement is studied in terms of quality of service (QoS) parameters such as latency, load balance, energy consumption, and runtime, based on the analysis of previous works and implementations. Moreover, the scientific-technical impact of smart container scheduling in the market is also discussed, showing the possible repercussions of the identified opportunities for this line of research.
(This article belongs to the Section Internet of Things)
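As a purely illustrative toy (not taken from the review), the kind of QoS-aware placement decision an intelligent container scheduler makes can be sketched as a weighted score over latency, load, and energy; all node metrics and weights below are invented for illustration.

```python
# Toy illustration: pick the cloud/fog node with the lowest QoS-weighted
# placement cost. Metrics and weights are invented, not from the paper.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float   # network latency to service consumers
    load: float         # current CPU load, 0..1
    energy_w: float     # marginal energy cost of one more container

def placement_score(n: Node, w_lat=0.5, w_load=0.3, w_energy=0.2) -> float:
    # Lower is better; metrics normalized against rough reference values.
    return w_lat * n.latency_ms / 100 + w_load * n.load + w_energy * n.energy_w / 10

nodes = [Node("cloud-1", 80, 0.40, 9.0),
         Node("fog-1", 15, 0.70, 4.0),
         Node("fog-2", 20, 0.35, 4.5)]
best = min(nodes, key=placement_score)
print(f"place container on {best.name}")
```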

14 pages, 3452 KiB  
Article
Elastic Computing in the Fog on Internet of Things to Improve the Performance of Low Cost Nodes
by Rafael Fayos-Jordan, Santiago Felici-Castell, Jaume Segura-Garcia, Adolfo Pastor-Aparicio and Jesus Lopez-Ballester
Electronics 2019, 8(12), 1489; https://doi.org/10.3390/electronics8121489 - 6 Dec 2019
Cited by 7 | Viewed by 3001
Abstract
The Internet of Things (IoT) is a network widely used to connect almost everything, everywhere, to the Internet. To achieve this goal, low-cost nodes are used; otherwise, it would be very expensive to expand so quickly. These networks are set up with small distributed devices (nodes) that have a power supply, processing unit, memory, sensors, and wireless communications. In the market, we can find different alternatives for these devices, such as single-board computers (SBCs), e.g., the Raspberry Pi (RPi), with different features. Usually these devices run a reduced version of a Linux operating system. Nevertheless, there are many scenarios that require enhanced computational power that these nodes alone are unable to provide. In this context, we need to introduce a kind of collaboration among the devices to overcome their constraints. We base our solution on a combination of clustering techniques (building a mesh network using their wireless capabilities) while orchestrating the resources to improve their processing capabilities in an elastic computing fashion. This paradigm is called fog computing on IoT. In this paper we propose the use of cloud computing technologies, such as Linux containers based on Docker and a container orchestration platform (COP), running on top of a cluster of these nodes but adapted to the fog computing paradigm. Note that these technologies are open source and developed for the Linux operating system. As an example, our results show an IoT application for soundscape monitoring as a proof of concept, which allows us to compare different alternatives in its design and implementation, in particular the choice of COP between Docker Swarm and Kubernetes. We conclude that by using and combining these techniques, we can improve the overall computational capabilities of these IoT nodes within a fog computing paradigm.
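To give a feel for the elastic-computing behavior described here, the sketch below deploys and scales a replicated service with the Docker SDK for Python; the registry, image, and service names are hypothetical, and the snippet illustrates the general mechanism rather than the authors' implementation.

```python
# Hypothetical sketch: elastically scale a replicated service across a
# Docker Swarm of small-board computers.
import docker
from docker.types import ServiceMode

client = docker.from_env()  # must point at a swarm manager node

# Deploy an assumed audio-analysis worker as a replicated swarm service.
service = client.services.create(
    image="registry.example.local/soundscape-worker:latest",  # assumed image
    name="soundscape-worker",
    mode=ServiceMode("replicated", replicas=2),
)

# When the incoming workload grows, raise the replica count so the swarm
# spreads additional containers over the other nodes in the cluster.
service.scale(6)
```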

76 pages, 5664 KiB  
Article
A Comprehensive Feature Comparison Study of Open-Source Container Orchestration Frameworks
by Eddy Truyen, Dimitri Van Landuyt, Davy Preuveneers, Bert Lagaisse and Wouter Joosen
Appl. Sci. 2019, 9(5), 931; https://doi.org/10.3390/app9050931 - 5 Mar 2019
Cited by 33 | Viewed by 7516
Abstract
(1) Background: Container orchestration frameworks provide support for management of complex distributed applications. Different frameworks have emerged only recently, and they have been in constant evolution as new features are being introduced. This reality makes it difficult for practitioners and researchers to maintain a clear view of the technology space. (2) Methods: we present a descriptive feature comparison study of the three most prominent orchestration frameworks: Docker Swarm, Kubernetes, and Mesos, which can be combined with Marathon, Aurora or DC/OS. This study aims at (i) identifying the common and unique features of all frameworks, (ii) comparing these frameworks qualitatively and quantitatively with respect to genericity in terms of supported features, and (iii) investigating the maturity and stability of the frameworks as well as the pioneering nature of each framework by studying the historical evolution of the frameworks on GitHub. (3) Results: (i) we have identified 124 common features and 54 unique features that we divided into a taxonomy of 9 functional aspects and 27 functional sub-aspects. (ii) Kubernetes supports the highest number of accumulated common and unique features for all 9 functional aspects; however, no evidence has been found for significant differences in genericity with Docker Swarm and DC/OS. (iii) Very few feature deprecations have been found, and 15 out of 27 sub-aspects have been identified as mature and stable. These are pioneered in descending order by Kubernetes, Mesos, and Marathon. (4) Conclusion: there is a broad and mature foundation that underpins all container orchestration frameworks. Likely areas for further evolution and innovation include system support for improved cluster security and container security, performance isolation of GPU, disk and network resources, and network plugin architectures.
(This article belongs to the Section Computing and Artificial Intelligence)