AI Server Utilizing Micro Mobility Big Data Collection

Utilizing WE-AI for Micro Mobility Data AI Server Implementation

This technical guideline describes how to construct an AI server that collects and utilizes multi-layered big data in the field of micro mobility, and how to apply the WE-AI distributed processing program to it. The objective is to manage and process micro mobility data effectively with WE-AI, thereby enhancing services and improving business performance.

Micro Mobility Big Data

Micro Mobility Multi-Layer Big Data refers to an approach that involves processing and analyzing large amounts of data generated from micro mobility services (e.g., electric scooters, bike-sharing, car-sharing) in a multi-layered structure. The goal is to process the data using various layers and extract value from it.

Micro mobility services generate real-time data from various sources such as GPS, sensors, and payment systems. This data includes information on movement patterns, usage volume, charging status, and user behavior. By utilizing the multi-layer big data approach, this data can be processed in the following stages:

  • Collection and storage: The data generated from micro mobility services is collected and stored in an AI server. This ensures more reliable storage of real-time micro mobility data and enables efficient processing.

  • Preprocessing: The collected data undergoes preprocessing to transform it into a suitable format for analysis. This stage involves data cleansing, outlier handling, missing value treatment, and validation of location data accuracy.

  • Integration and analysis: Data from different sources is integrated to extract valuable insights, for example by combining location data with usage pattern data to analyze user movement patterns and identify traffic congestion. This stage also includes analysis tasks such as building demand prediction models.

  • Data visualization: The results of the analysis are visualized to facilitate intuitive understanding. This includes visualizing route maps, graphs, and charts to represent data characteristics.
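The preprocessing stage above can be sketched in Python. The record fields (`ride_id`, `lat`, `lon`, `speed_kmh`) and the 60 km/h speed cap are illustrative assumptions, not part of any specific micro mobility schema:

```python
from statistics import mean

def preprocess(rides):
    """Cleanse raw ride records: drop malformed rows, validate GPS
    coordinates, clamp outliers, and impute missing speed values."""
    cleaned = []
    for r in rides:
        # Data cleansing: skip records missing mandatory fields.
        if r.get("ride_id") is None:
            continue
        # Location accuracy validation: latitude/longitude must be in range.
        lat, lon = r.get("lat"), r.get("lon")
        if lat is None or lon is None or not (-90 <= lat <= 90 and -180 <= lon <= 180):
            continue
        cleaned.append(dict(r))

    # Outlier handling: clamp implausible scooter speeds (assumed cap: 60 km/h).
    for r in cleaned:
        if r.get("speed_kmh") is not None:
            r["speed_kmh"] = min(r["speed_kmh"], 60.0)

    # Missing value treatment: impute absent speeds with the mean of the rest.
    speeds = [r["speed_kmh"] for r in cleaned if r.get("speed_kmh") is not None]
    fill = mean(speeds) if speeds else 0.0
    for r in cleaned:
        if r.get("speed_kmh") is None:
            r["speed_kmh"] = fill
    return cleaned

sample = [
    {"ride_id": 1, "lat": 37.5, "lon": 127.0, "speed_kmh": 18.0},
    {"ride_id": 2, "lat": 37.6, "lon": 127.1, "speed_kmh": None},   # missing value
    {"ride_id": 3, "lat": 95.0, "lon": 127.0, "speed_kmh": 20.0},   # invalid GPS
    {"ride_id": None, "lat": 37.5, "lon": 127.0, "speed_kmh": 22.0},  # no id
    {"ride_id": 4, "lat": 37.4, "lon": 126.9, "speed_kmh": 250.0},  # outlier
]
clean = preprocess(sample)
```

Note that outliers are clamped before imputation so an implausible reading does not distort the fill value.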

AI Server Utilizing Micro Mobility Big Data Collection

An AI server that processes real-time micro mobility data performs various functions such as large-scale data processing, complex analysis, predictive model building, and real-time responses. It requires a network infrastructure through which edge-device cameras can transmit photos and videos to the AI server over mobile networks, enabling real-time video data collection at a national scale. Representative cases of analyzing data stored on the AI server and applying it to AI models are as follows:

  • Traffic congestion prediction: It is possible to develop a model that predicts the traffic congestion level in a specific area using micro mobility data. This enables users to avoid congested time periods and routes for smoother travel.

  • Movement pattern analysis: By analyzing micro mobility data, it is possible to understand movement patterns. This can be used for urban planning or improving the efficiency of micro mobility services.

  • Optimization of charging station locations: By analyzing the movement patterns and demand data of micro mobility users, it is possible to optimize the locations of charging stations. This allows users to conveniently access charging facilities and minimizes operational costs.

  • Urban environment and public safety maintenance: By using edge device cameras, it is possible to detect damage to infrastructure facilities such as signs and boundary markers, as well as monitor public safety and prevent accidents and fires. In the event of abnormal situations, alarms can be sent to public institutions, and evidence can be secured.

  • HD map creation and digital twin production: It can provide data for creating HD maps and producing digital twins for autonomous driving robots, and so on.

  • User segmentation and targeting: By building real-time big data on foot traffic (floating population) for commercial district analysis and segmenting it, it is possible to understand the characteristics and preferences of each user group. This enables targeted marketing and service provision.
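As a concrete illustration of the demand and congestion prediction cases above, the following sketch aggregates trip starts per zone and hour and forecasts a slot's demand as its historical average. The zone labels and the averaging baseline are assumptions; a production model would use proper time-series or machine learning methods:

```python
from collections import defaultdict

def hourly_demand(trips):
    """Aggregate trip start events into a (zone, hour) -> per-day count history."""
    by_slot = defaultdict(lambda: defaultdict(int))
    for t in trips:
        by_slot[(t["zone"], t["hour"])][t["day"]] += 1
    return {slot: list(per_day.values()) for slot, per_day in by_slot.items()}

def forecast(history, zone, hour):
    """Predict demand for a slot as the mean of the observed daily counts."""
    counts = history.get((zone, hour), [])
    return sum(counts) / len(counts) if counts else 0.0

trips = [
    {"zone": "A", "hour": 8, "day": 1}, {"zone": "A", "hour": 8, "day": 1},
    {"zone": "A", "hour": 8, "day": 2},
    {"zone": "B", "hour": 18, "day": 1},
]
h = hourly_demand(trips)
print(forecast(h, "A", 8))  # (2 + 1) / 2 = 1.5
```

The same aggregation could feed charging-station placement, with high-demand slots marking candidate locations.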

Application of Distributed AI Processing Program to AI Server

For micro mobility big data, the required analysis may call for either a single powerful compute resource or a large aggregate of distributed computational power. WE-AI calculates the required computational power from the input data and analysis techniques, and provides an adaptive computing environment by utilizing the GPUs of personal computers either one-to-one or in clusters. To apply WE-AI, the following steps are taken:

  1. Configure server cases based on the model and data requirements (clustering, one-to-one).

  2. Set up a safety limit to prevent excessive charges by defining the maximum allowable computational power (and thus cost) for the given workload and model.

  3. Insert WE-AI invocation code at the points in the program where analysis is required.
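The three steps above can be pictured with a hypothetical wrapper. WE-AI's actual API is not documented here, so the class name `WeAIClient`, its methods, and the cost units below are purely illustrative assumptions:

```python
class WeAIClient:
    """Hypothetical sketch of a WE-AI client; names and units are assumed."""

    def __init__(self, mode="clustered", max_cost=100.0):
        # Step 1: server case configuration ("clustered" or "one-to-one").
        assert mode in ("clustered", "one-to-one")
        self.mode = mode
        # Step 2: safety cap on total computational cost to prevent overcharges.
        self.max_cost = max_cost
        self.spent = 0.0

    def run(self, task, cost_estimate):
        # Step 3: called at the points in the program where analysis is needed.
        if self.spent + cost_estimate > self.max_cost:
            raise RuntimeError("cost cap exceeded; refusing to dispatch task")
        self.spent += cost_estimate
        return task()  # in practice dispatched to remote GPUs, not run locally

client = WeAIClient(mode="clustered", max_cost=10.0)
result = client.run(lambda: sum(range(100)), cost_estimate=3.0)
```

The cost cap turns the safety measure of step 2 into a hard precondition checked before every dispatch.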

Distributed AI processing using GPUs in personal computers can be seen as forming small-scale clusters. This allows for the utilization of additional computational resources to enhance the performance of AI processing tasks. The following are methods for applying distributed AI processing programs to an AI server:

  1. Distributed architecture design:

  • Master-worker architecture: Build a distributed architecture consisting of a centralized master node and multiple worker nodes. The master node manages task scheduling and resource allocation, while the worker nodes execute the AI processing tasks.

  • Peer-to-peer architecture: As an alternative to the centralized master-worker design, consider an architecture that performs distributed processing through peer-to-peer communication, which avoids the single point of failure inherent in a centralized master node.

  2. Client program development:

  • Develop a client program to be installed on personal computers. This program communicates with the AI server, receives task requests, and processes them using the GPU.

  3. Resource management:

  • Distributed resource management: Establish a system for managing and allocating resources from multiple hosts or nodes connected to the AI server. This balances resource usage and avoids overloaded nodes.

  • Resource monitoring: Continuously monitor resource usage on the AI server and establish a system to detect resource shortages or performance degradation.

  4. Task scheduling:

  • Task partitioning: Divide large-scale AI processing tasks into smaller units and distribute them among multiple nodes. This allows for parallel processing of tasks, reducing overall processing time.

  • Scheduling algorithms: Develop scheduling algorithms that determine which node each task is assigned to. These algorithms should consider resource availability, task priorities, and network bandwidth to make optimal decisions.

  5. Communication and data management:

  • Efficient communication: Utilize high-performance network infrastructure for efficient communication between distributed systems and consider methods to optimize data transfer speeds.

  • Data partitioning and synchronization: Consider methods for partitioning and synchronizing large-scale data. Distributed tasks should be able to share and update data simultaneously.

  6. Result collection and integration:

  • Transmit the processed results from clients to the AI server and collect and integrate the results on the AI server. This requires mechanisms to ensure data consistency and integrity.

  7. Error handling and recovery:

  • Implement mechanisms for handling and recovering from communication errors between clients and the AI server, as well as client task processing errors. This maintains system stability and reliability.
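The master-worker flow described above, from task partitioning through scheduling to result integration, can be sketched in-process with Python threads standing in for networked worker nodes. The chunk size and round-robin scheduler are simplifying assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def partition(data, chunk_size):
    """Task partitioning: split one large job into smaller units."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def schedule(chunks, n_workers):
    """Scheduling: round-robin assignment balances load across workers.
    A production scheduler would also weigh resource availability,
    task priorities, and network bandwidth."""
    assignment = [[] for _ in range(n_workers)]
    for i, chunk in enumerate(chunks):
        assignment[i % n_workers].append(chunk)
    return assignment

def worker(chunks):
    """Worker node: process its assigned chunks (here, compute partial sums)."""
    return sum(sum(c) for c in chunks)

data = list(range(1000))
chunks = partition(data, 100)
assignment = schedule(chunks, n_workers=4)
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(worker, assignment))
total = sum(partials)  # result collection and integration on the "master"
```

A real deployment would replace the thread pool with networked client programs and add the error handling and recovery mechanisms noted above.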
