Web3-AI Track Panorama: Technical Logic and In-Depth Analysis of Top Projects

As AI narratives continue to heat up, more and more attention is focused on this track. This article analyzes the technical logic, application scenarios, and representative projects of the Web3-AI track in depth, presenting a comprehensive panorama of the field and its development trends.

1. Web3-AI: Analysis of Technical Logic and Emerging Market Opportunities

1.1 The Integration Logic of Web3 and AI: How to Define the Web3-AI Track

In the past year, AI narratives have been exceptionally popular in the Web3 industry, with AI projects springing up like mushrooms after rain. Although many projects involve AI technology, some use AI only in certain parts of their products, and their underlying token economics have no substantial connection to the AI component; such projects are therefore excluded from this article's discussion of Web3-AI.

This article focuses on projects in which blockchain addresses issues of production relations while AI addresses productivity: projects that provide AI products and use Web3 economic models as the tool for organizing production relations, the two complementing each other. We categorize these projects as the Web3-AI track. To help readers better understand this track, we will walk through the development process and challenges of AI, and explain how the combination of Web3 and AI addresses these problems and creates new application scenarios.

1.2 The Development Process and Challenges of AI: From Data Collection to Model Inference

AI enables computers to simulate, extend, and enhance human intelligence, allowing them to perform a wide range of complex tasks, from language translation and image classification to facial recognition and autonomous driving. AI is changing the way we live and work.

The process of developing an AI model typically includes the following key steps: data collection and preprocessing, model selection and tuning, model training, and model inference. As a simple example, to develop a model that classifies images of cats and dogs, you would need to (a minimal code sketch follows the list):

  1. Data Collection and Preprocessing: Collect an image dataset containing cats and dogs, either from public datasets or by gathering real data yourself. Label each image with its category (cat or dog), ensuring the labels are accurate. Convert the images into a format the model can consume, and split the dataset into training, validation, and test sets.

  2. Model Selection and Tuning: Choose an appropriate model, such as a Convolutional Neural Network (CNN), which is well suited to image classification. Tune the model's parameters or architecture according to the requirements; in general, the network depth can be adjusted based on the complexity of the AI task. For this simple classification example, a fairly shallow network is sufficient.

  3. Model Training: Train the model on GPUs, TPUs, or high-performance computing clusters; training time depends on the complexity of the model and the available computing power.

  4. Model Inference: The output of training is usually saved as a model weights file, and inference refers to using the trained model to predict or classify new data. At this stage, the test set or new data is used to evaluate the model's classification performance, commonly with metrics such as accuracy, recall, and F1-score.
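
The four steps above can be compressed into a short, illustrative sketch. The following PyTorch code is a minimal example only; the dataset path, image size, network depth, and training schedule are assumptions rather than recommendations:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# 1. Data collection and preprocessing: assumes images live in data/cats_dogs/<class>/*.jpg
transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
full_set = datasets.ImageFolder("data/cats_dogs", transform=transform)
n = len(full_set)
n_train, n_val = int(0.7 * n), int(0.15 * n)
train_set, val_set, test_set = random_split(full_set, [n_train, n_val, n - n_train - n_val])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# 2. Model selection and tuning: a shallow CNN is enough for binary cat/dog classification
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),  # two output classes: cat, dog
)

# 3. Model training: use a GPU if one is available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# The trained parameters are saved as the "model weights" file used later for inference
torch.save(model.state_dict(), "cat_dog_weights.pt")
```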

After data collection and preprocessing, model selection and tuning, and training, the trained model runs inference on the test set and outputs a predicted probability P for each class, i.e., how likely the model believes a given image is a cat or a dog.
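
Continuing the sketch above (it reuses model, test_set, and device from the training snippet), inference loads the saved weights, converts the network outputs into per-class probabilities P, and reports the usual evaluation metrics; again, this is illustrative only:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score, recall_score, f1_score

# Load the trained weights back into the same architecture used during training
model.load_state_dict(torch.load("cat_dog_weights.pt"))
model.eval()

y_true, y_pred = [], []
with torch.no_grad():
    for images, labels in DataLoader(test_set, batch_size=32):
        logits = model(images.to(device))
        probs = F.softmax(logits, dim=1)  # P(cat), P(dog) for each image
        y_pred.extend(probs.argmax(dim=1).cpu().tolist())
        y_true.extend(labels.tolist())

print("accuracy:", accuracy_score(y_true, y_pred))
print("recall:  ", recall_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred))
```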

A trained AI model can then be integrated into various applications to perform different tasks. In this example, the cat-and-dog classifier could be embedded in a mobile application, allowing users to upload a picture and receive a classification result.

However, the centralized AI development process runs into problems in the following areas:

User privacy: In centralized settings, the AI development process is often opaque, and user data may be collected and used for AI training without users' knowledge.

Data acquisition: Small teams or individuals may be unable to access non-open-source data in specialized fields (such as medical data).

Model selection and tuning: Small teams find it difficult to access domain-specific model resources, and model tuning can be prohibitively expensive.

Computing power acquisition: For individual developers and small teams, the high cost of purchasing GPUs or renting cloud computing power is a significant financial burden.

AI asset income: Data annotators often struggle to earn income commensurate with their effort, and AI developers find it hard to match their research output with buyers who need it.

These challenges of centralized AI can be addressed by combining AI with Web3. As a new type of production relationship, Web3 is naturally compatible with AI as a new productive force, enabling technology and productive capacity to advance together.

1.3 The Synergistic Effects of Web3 and AI: Role Transformation and Innovative Applications

The combination of Web3 and AI can enhance user sovereignty by providing an open AI collaboration platform that turns users from mere AI consumers in the Web2 era into participants, creating AI that everyone can own. At the same time, the integration of the Web3 world with AI technology can spark more innovative application scenarios and gameplay.

Based on Web3 technology, the development and application of AI will usher in a brand-new collaborative economic system. Users' data privacy can be guaranteed, data crowdsourcing models advance AI models, numerous open-source AI resources become available, and shared computing power can be obtained at lower cost. With decentralized collaborative crowdsourcing mechanisms and open AI marketplaces, a fair income distribution system can be achieved, encouraging more people to drive the advancement of AI technology.

In Web3 scenarios, AI can have a positive impact across multiple tracks. For example, AI models can be integrated into smart contracts to improve efficiency in application scenarios such as market analysis, security detection, and social clustering. Generative AI not only lets users experience the role of an "artist," for instance by creating their own NFTs with AI, but also enables rich and varied game scenarios and engaging interactive experiences in GameFi. The rich infrastructure provides a smooth development experience, allowing both AI experts and newcomers to the AI field to find suitable entry points.

2. Analysis of the Web3-AI Ecological Project Landscape and Architecture

We studied 41 projects in the Web3-AI track and categorized them into different layers. The classification logic for each layer is shown in the figure below: the infrastructure layer, the middleware layer, and the application layer, each further divided into different sections. In the next chapter, we will conduct an in-depth analysis of some representative projects.

[Figure: Web3-AI project landscape covering the infrastructure, middleware, and application layers]

The infrastructure layer covers the computing resources and technical architecture that support the entire AI lifecycle; the middleware layer includes the data management, model development, and inference and verification services that connect the infrastructure with applications; and the application layer focuses on applications and solutions aimed directly at users.

Infrastructure Layer:

The infrastructure layer is the foundation of the AI lifecycle. This article classifies computing power, AI chains, and development platforms as part of this layer. It is this infrastructure that supports the training and inference of AI models and delivers powerful, practical AI applications to users.

  • Decentralized computing networks: These provide distributed computing power for AI model training, ensuring efficient and economical use of computing resources. Some projects offer decentralized computing power marketplaces where users can rent compute at low cost or share their own compute to earn income; representative projects include IO.NET and Hyperbolic. Other projects have derived new gameplay from this model, such as Compute Labs, which proposes a tokenization protocol that lets users participate in computing power leasing in different ways by purchasing NFTs representing physical GPUs.

  • AI Chain: Uses blockchain as the foundation of the AI lifecycle, enabling seamless interaction between on-chain and off-chain AI resources and promoting the growth of the industry ecosystem. Decentralized on-chain AI marketplaces can trade AI assets such as data, models, and agents, and provide AI development frameworks and supporting tools; a representative project is Sahara AI. AI chains can also drive progress in AI across different fields, for example Bittensor, which promotes competition among different types of AI through an innovative subnet incentive mechanism.

  • Development platforms: Some projects provide AI agent development platforms that also enable the trading of AI agents, such as Fetch.ai and ChainML. One-stop tools help developers create, train, and deploy AI models more conveniently, with Nimble as a representative project. Together, these infrastructures promote the widespread adoption of AI technology within the Web3 ecosystem.

Middleware Layer:

This layer covers AI data, models, and inference and verification; applying Web3 technology here yields higher efficiency.

  • Data: The quality and quantity of data are the key factors affecting how well a model trains. In the Web3 world, crowdsourced data and collaborative data processing optimize resource utilization and reduce data costs. Users retain autonomy over their data and can sell it under privacy protection, rather than having it scraped and exploited by unscrupulous intermediaries for profit. For data buyers, these platforms offer a wide range of choices at very low cost. Representative projects include Grass, which uses users' bandwidth to scrape web data, and xData, which collects media information through user-friendly plugins and lets users upload tweets.

In addition, some platforms allow domain experts or ordinary users to perform data preprocessing tasks such as image labeling and data classification; processing financial or legal data may require specialized knowledge, and users can tokenize those skills to enable collaborative crowdsourcing of data preprocessing. For example, Sahara AI's AI marketplace offers data tasks spanning many domains, while AIT Protocol labels data through human-machine collaboration.

  • Models: As described in the AI development process above, different types of requirements call for suitable models. Common models for image tasks include CNNs and GANs, while the YOLO family is a common choice for object detection; for text tasks, common choices include RNNs, Transformers, and various specialized or general-purpose large models. Tasks of different complexity also require models of different depth, and model tuning is sometimes necessary (a brief sketch of matching models to tasks follows below).

Some projects let users contribute different types of models or train models collaboratively through crowdsourcing. For example, Sentient's modular design allows users to place trusted model data in its storage and distribution layers for model optimization, and the development tools provided by Sahara AI ship with advanced AI algorithms and computing frameworks and support collaborative training.
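
As a rough illustration of matching model families to task types, the hypothetical helper below falls back to standard pretrained models from torchvision and Hugging Face; the task-to-model mapping is a simplification for demonstration, not a rule:

```python
from torchvision import models
from transformers import pipeline

def pick_model(task: str):
    """Illustrative default choice of model family for a given task type."""
    if task == "image_classification":
        # CNN family: a pretrained ResNet to be fine-tuned on the target dataset
        return models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if task == "text_classification":
        # Transformer family via Hugging Face's high-level pipeline API
        return pipeline("text-classification")
    raise ValueError(f"No default model configured for task: {task}")

# Example usage: a deeper or more specialized model can be swapped in as needed
classifier = pick_model("text_classification")
print(classifier("Web3 and AI complement each other."))
```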

  • Inference and verification: After training, the model produces a weights file that can be used for classification, prediction, or other specific tasks; this is the inference stage. Inference is usually accompanied by a verification mechanism that checks whether the model performing the inference is the claimed one and whether any malicious behavior has occurred. In Web3, inference can often be integrated into smart contracts that invoke a model on demand, and common verification approaches include ZKML, OPML, and TEEs. A representative project is ORA's on-chain AI oracle (OAO), which introduces OPML as a verifiable layer for AI oracles; ORA's official site also describes research on ZKML and on opp/ai (ZKML combined with OPML).
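
To make the idea of smart contracts invoking verified inference concrete, the sketch below shows how a client might request inference from an on-chain AI oracle using web3.py. The RPC endpoint, contract address, ABI, and function name are all hypothetical placeholders and do not correspond to ORA's actual OAO interface:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # hypothetical RPC endpoint

# Hypothetical oracle contract: the address and minimal ABI below are illustrative only
ORACLE_ADDRESS = "0x0000000000000000000000000000000000000000"
ORACLE_ABI = [{
    "name": "requestInference",
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [
        {"name": "modelId", "type": "uint256"},
        {"name": "inputData", "type": "bytes"},
    ],
    "outputs": [{"name": "requestId", "type": "uint256"}],
}]
oracle = w3.eth.contract(address=ORACLE_ADDRESS, abi=ORACLE_ABI)

# Build an unsigned transaction asking the oracle to run model #1 on some encoded input.
# The oracle network would execute the model off-chain and post back a result whose
# correctness is attested by a verification layer such as OPML, ZKML, or a TEE.
tx = oracle.functions.requestInference(1, b"encoded input").build_transaction({
    "from": "0x0000000000000000000000000000000000000001",
    "nonce": 0,
    "gas": 200_000,
    "gasPrice": Web3.to_wei(1, "gwei"),
    "chainId": 1,
})
print(tx)
```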

Application Layer:

This layer consists mainly of user-facing applications that combine AI with Web3, enabling more interesting and innovative gameplay. This article mainly surveys projects in areas such as AIGC (AI-generated content), AI agents, and data analysis.

  • AIGC: AIGC can be extended into NFT, gaming, and other Web3 tracks. Users can generate text, images, and audio directly from prompts (keywords supplied by the user), and even generate customized in-game characters according to their preferences.
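
As a simple illustration of prompt-driven generation, the minimal sketch below uses the open-source diffusers library with a publicly available Stable Diffusion checkpoint; the model id, prompt, and output path are examples, and a Web3-AI platform would wrap similar generation behind its own interface and asset (e.g., NFT) minting flow:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image checkpoint (example model id)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is strongly recommended for diffusion models

# Prompt-driven generation: the user's keywords become an image that could
# subsequently be minted as an NFT or used as a custom game character asset.
prompt = "a pixel-art astronaut cat, game character sheet"
image = pipe(prompt).images[0]
image.save("generated_character.png")
```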