Distributed AI Program


WE-AI : Distributed AI Service Technical Definition

The purpose of this technical definition document is to describe in detail the development of a node program for distributed AI processing that utilizes the idle GPU resources of personal computers. The program aims to address the challenges arising from the proliferation of AI and the resulting shortage of GPU compute. By leveraging the idle resources of personal computers, it offers benefits to individual PC users and enables efficient, scalable AI computation.

The Mission of the WE-AI Service

The primary objectives of the distributed AI processing node program are as follows:

  • Utilize the idle GPU resources of personal computers to perform distributed AI computations.

  • Design and implement an efficient architecture for distributed processing.

  • Ensure the security and privacy of data and operations within the program.

  • Establish a reliable and scalable system to support a large user base.

  • Implement a reward system using WEBI TOKEN to incentivize users for providing idle resources.

  • Expand the program to utilize a decentralized blockchain ledger for data distribution and persistence.

Requirements

1) Hardware Requirements

  • The program requires NVIDIA GPUs installed in personal computers.

  • Minimum GPU specifications will be defined based on performance and compatibility; the program targets users whose GPUs meet or exceed those specifications.

  • An adequate cooling system is required to handle the heat and power draw generated during sustained GPU usage.
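As a sketch of how a node might decide it is idle, the snippet below parses the output of `nvidia-smi` (the query fields are real nvidia-smi options; the 10% utilization threshold and polling policy are assumptions, not values from this document):

```python
import subprocess

IDLE_UTILIZATION_PCT = 10  # assumed threshold; not specified in this document

def parse_gpu_utilization(csv_output: str) -> list[int]:
    """Parse output of `nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits`."""
    return [int(line.strip()) for line in csv_output.strip().splitlines()]

def is_idle(utilizations: list[int]) -> bool:
    """Treat the machine as idle when every GPU is below the threshold."""
    return all(u < IDLE_UTILIZATION_PCT for u in utilizations)

def query_gpus() -> list[int]:
    """Query live utilization; requires an NVIDIA driver and nvidia-smi on PATH."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_utilization(out)
```

In practice the node program would poll this periodically and only volunteer for tasks while `is_idle` holds.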

2) Software Requirements

  • Operating System: The program supports multiple operating systems, such as Windows and Linux, and should be able to operate independently on the user's personal computer.

  • Provide installation and management tools that allow users to easily install and manage the program.

  • Embed the necessary deep learning frameworks or libraries for distributed AI processing into the program or provide instructions for users to install them directly.

Architecture Design

The program will be designed with the following architectural components:

  • Node management module: Responsible for registering, monitoring, and managing individual PC nodes.

  • Task distribution module: Handles the partitioning and distribution of AI computational tasks to the available nodes.

  • Communication module: Facilitates communication and data exchange between nodes and the central control system.

  • Resource utilization module: Optimizes the allocation and utilization of GPU resources across the network of nodes.

  • Security module: Implements security measures to ensure data privacy, authentication, and secure communication.
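The module boundaries above are not specified further in this document; the following is a minimal sketch of what the node management module's registry might look like, with assumed field names and an assumed heartbeat timeout:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Node:
    """Illustrative node record; the real schema is not defined in this document."""
    node_id: str
    gpu_model: str
    last_heartbeat: float = field(default_factory=time.time)
    busy: bool = False

class NodeRegistry:
    """Registers nodes, tracks liveness via heartbeats, lists available ones."""
    HEARTBEAT_TIMEOUT = 30.0  # seconds; an assumed liveness window

    def __init__(self) -> None:
        self._nodes: dict[str, Node] = {}

    def register(self, node: Node) -> None:
        self._nodes[node.node_id] = node

    def heartbeat(self, node_id: str) -> None:
        self._nodes[node_id].last_heartbeat = time.time()

    def available(self) -> list[Node]:
        """Nodes that are not busy and have heartbeated recently."""
        now = time.time()
        return [n for n in self._nodes.values()
                if not n.busy and now - n.last_heartbeat < self.HEARTBEAT_TIMEOUT]
```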

Implementation Technology

Dynamic Allocation of GPU Resources

To achieve distributed processing of large-scale computations on the idle resources of personal computers, concepts from cloud computing and distributed computing can be applied. The process works as follows:

1) Platform Construction: A platform for distributed processing must first be established. This platform connects to the personal computers, manages their resources, and distributes tasks; it should include functions such as task scheduling, resource allocation and management, and communication.

2) Task Partitioning: The tasks that need to be processed are divided into smaller units. These tasks should be parallelizable. For example, when processing a large-scale dataset, the data can be divided into smaller blocks or the task can be divided into multiple parts.
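A minimal illustration of the partitioning step, assuming the dataset can be sliced into independent, parallelizable blocks:

```python
def partition(items: list, chunk_size: int) -> list[list]:
    """Split a dataset into fixed-size blocks that can be processed in parallel."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
```

Each block can then be handed to a different node; the last block may be smaller than `chunk_size`.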

3) Task Distribution: The platform detects the idle resources of personal computers and assigns tasks to utilize them. Resource detection and management mechanisms are required to identify idle resources and assign pending tasks to them. The tasks are delivered to personal computers with idle resources and processed in parallel.
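The detection-and-assignment step might be sketched as a greedy dispatcher that pops one pending task per idle node (the node identifiers and queue structure here are assumptions for illustration):

```python
from collections import deque

def dispatch(tasks: deque, idle_nodes: list[str]) -> dict[str, object]:
    """Assign one pending task to each idle node, in order, until tasks run out."""
    assignments: dict[str, object] = {}
    for node in idle_nodes:
        if not tasks:
            break
        assignments[node] = tasks.popleft()
    return assignments
```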

4) Task Execution: The assigned tasks are performed on personal computers. The tasks are executed using the local resources of the personal computers, and the results of the tasks are returned to the platform. Network communication can be used for data sharing between tasks.

5) Result Integration: After returning the results of the tasks to the platform, the platform integrates the results to generate the final outcome. Depending on the requirements, the final results can be sent back to the personal computers or delivered to other systems for utilization. After validating the final results, the participants in the tasks are rewarded with WEBI tokens.
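Steps 4 and 5 can be sketched together: partial results come back from nodes, are validated and merged, and each contributing node is credited. The validation rule and the reward rate per task are placeholders, not values defined in this document:

```python
REWARD_PER_TASK = 1.0  # WEBI per validated task; an assumed placeholder rate

def integrate(results: dict[str, list], validate=lambda r: r is not None):
    """Merge validated per-chunk results and credit the nodes that produced them.

    `results` maps node_id -> partial result (or None if the node failed).
    Returns (merged_result, rewards) where rewards maps node_id -> WEBI earned.
    """
    merged: list = []
    rewards: dict[str, float] = {}
    for node_id, partial in results.items():
        if validate(partial):
            merged.extend(partial)
            rewards[node_id] = rewards.get(node_id, 0.0) + REWARD_PER_TASK
    return merged, rewards
```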

Security and Privacy

The program will prioritize the security and privacy of user data and operations by implementing the following measures:

  • Encryption of data transmission between nodes and the central control system.

  • Access control mechanisms to prevent unauthorized access to user data.

  • Secure storage and handling of sensitive user information.

  • Anonymization of user data to protect privacy.
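The document does not fix a concrete scheme for secure communication; one minimal building block would be a shared-secret HMAC over each result payload, so the platform can detect tampering in transit. Key distribution and transport encryption (e.g. TLS) are out of scope of this sketch:

```python
import hashlib
import hmac
import json

def sign(payload: dict, key: bytes) -> str:
    """HMAC-SHA256 over a canonical JSON encoding of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str, key: bytes) -> bool:
    """Constant-time comparison against the expected signature."""
    return hmac.compare_digest(sign(payload, key), signature)
```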

Reward (WEBI TOKEN) Payment System

  • Establishing a system that compensates personal computer owners for contributed resources by issuing virtual assets.

  • Providing features for accumulating and converting virtual assets based on resource contributions, and offering users a way to cash out their virtual assets at any time.

  • The virtual asset management system should be implemented with consideration for security and transparency.
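A minimal sketch of the accrual and cash-out features described above, with an in-memory ledger standing in for the real virtual asset management system (names and amounts are illustrative):

```python
class TokenLedger:
    """Per-user WEBI accrual; the real system would be persistent and audited."""

    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def credit(self, user: str, amount: float) -> None:
        """Accumulate rewards for contributed resources."""
        self.balances[user] = self.balances.get(user, 0.0) + amount

    def cash_out(self, user: str, amount: float) -> float:
        """Withdraw up to the accumulated balance; raises on overdraw."""
        balance = self.balances.get(user, 0.0)
        if amount > balance:
            raise ValueError("insufficient balance")
        self.balances[user] = balance - amount
        return amount
```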

User Interface

  • Providing a user-friendly interface that allows users to easily install and operate the program.

  • Visualizing the utilization status of GPU resources, the accumulation and usage history of virtual assets, and providing users with easy access to information.

  • Collecting and incorporating feedback from users to continuously improve the program.

Performance and Scalability

The program will be designed to achieve high performance and scalability by considering the following factors:

  • Efficient load balancing to distribute tasks evenly across available nodes.

  • Parallel processing techniques to leverage the computational power of multiple GPUs.

  • Monitoring and optimization of system performance to ensure smooth and efficient operations.

  • Scalable architecture that can accommodate an increasing number of users and AI computations.
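The load-balancing point above can be illustrated with a least-loaded-node scheduler built on a min-heap; the node names and the outstanding-task-count metric are assumptions for illustration:

```python
import heapq

def balance(tasks: list, nodes: list[str]) -> dict[str, list]:
    """Assign each task to the node with the fewest outstanding tasks."""
    heap = [(0, name) for name in nodes]  # (load, node) pairs
    heapq.heapify(heap)
    placement: dict[str, list] = {name: [] for name in nodes}
    for task in tasks:
        load, name = heapq.heappop(heap)  # least-loaded node
        placement[name].append(task)
        heapq.heappush(heap, (load + 1, name))
    return placement
```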

WE-AI Service Monetization Model

The distributed AI computing node program utilizing idle GPU resources has strong potential for commercialization. It enables efficient use of otherwise unused hardware and gives personal PC owners an incentive to contribute to the distributed network. By deploying node programs specialized for personal PCs and forming them into a distributed node network, those PCs can collectively contribute to AI computations. With robust security measures and partnership collaborations, a solution can be built in which both personal PC owners and WE-AI ecosystem participants benefit.

  • Partnership and Collaboration Framework

Identify potential use cases and establish partnerships by collaborating with AI solution providers, research institutions, and companies. Provide the distributed AI computing node program (WE-AI) tailored to partner needs such as large-scale data processing and complex computation. Offer distributed AI computing node program products suited to each client's deployment model.

  • Market Acceptance and Promotion

Provide an incentive system targeting personal PC owners interested in AI and distributed computing. Calculate and distribute rewards based on the contributions of each participant.

  • Revenue Model

A portion of the rewards earned by participating nodes will be retained to support the operation and further development of the distributed network. Additionally, explore revenue-sharing plans with AI service providers.
