Is it time for Web3 to showcase its true capabilities after AI's "downsizing"?
Author: Haotian
Watching the AI industry recently, I've noticed an increasingly pronounced "sinking" trend: alongside the mainstream consensus of concentrating compute on ever-"larger" models, a branch leaning toward local small models and edge computing has emerged.
You can see it everywhere: Apple Intelligence covering 500 million devices, Microsoft releasing Mu, a 330-million-parameter model dedicated to Windows 11, Google DeepMind running robots "offline," and so on.
What's the difference? Cloud AI competes on parameter scale and training data, with the ability to burn money as the core competitiveness; local AI competes on engineering optimization and scenario fit, and goes a step further on privacy, reliability, and practicality. (The hallucination problem of large general-purpose models will severely hamper their penetration into vertical scenarios.)
This is actually a bigger opportunity for Web3 AI. While everyone was competing on "general" capability (compute, data, algorithms), the game was naturally monopolized by the traditional giants; trying to out-compete Google, AWS, OpenAI, and the like by wrapping them in the concept of decentralization is pure wishful thinking: no resource advantage, no technical advantage, and even less of a user base.
But in a world of local models plus edge computing, the situation facing blockchain-based services is quite different.
When AI models run on users' devices, how do you prove that the outputs haven't been tampered with? How do you get models to collaborate while preserving privacy? These questions are exactly where blockchain technology plays to its strengths...
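To make the tamper-proofing question concrete, here is a minimal sketch (my own illustration under stated assumptions, not the protocol of any project mentioned in this piece): the device hashes its model weights, binds that hash to each output, and signs the record so a verifier can check it against a public key registered on-chain. `MODEL_WEIGHTS`, `local_inference`, and the key handling are all hypothetical placeholders.

```python
# Minimal sketch, not any named project's actual protocol: a device binds
# a hash of its model weights to each inference output and signs the
# record, so a verifier can detect tampering with either one.
# Assumes the third-party `cryptography` package (pip install cryptography).
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical placeholders: a real device would hash the actual weight
# file and run real on-device inference.
MODEL_WEIGHTS = b"...model weight bytes loaded from disk..."

def local_inference(prompt: str) -> str:
    return f"answer to: {prompt}"  # stand-in for the local model call

device_key = Ed25519PrivateKey.generate()   # key held by the device
device_pub = device_key.public_key()        # registered on-chain once

def attest(prompt: str) -> dict:
    """Run local inference and return a signed attestation record."""
    output = local_inference(prompt)
    record = {
        "model_hash": sha256_hex(MODEL_WEIGHTS),
        "prompt_hash": sha256_hex(prompt.encode()),
        "output_hash": sha256_hex(output.encode()),
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = device_key.sign(payload).hex()
    return record  # small enough to anchor on-chain or in a rollup

def verify(record: dict) -> bool:
    """Check the signature against the device's registered public key."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    try:
        device_pub.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

print(verify(attest("summarize my notes")))  # True unless tampered with
```

Note the sketch's limit: the signature only proves the record came from the device's key, not that the model actually computed that output; closing that gap is the harder problem this new wave of projects is circling.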
I've noticed some new Web3 AI projects pointed in this direction: Lattica, the data communication protocol launched by @Gradient_HQ, which recently secured $10M from Pantera, targets the data-monopoly and black-box problems of centralized AI platforms; @PublicAI_'s HeadCap, a brainwave device that collects real human data to build a "human verification layer," has already generated $14M in revenue. In essence, they are all trying to solve the "trustworthiness" problem of local AI.
In one sentence: only when AI truly "sinks" into every device will decentralized collaboration shift from concept to necessity.
#Web3AI projects should be thinking about how to provide infrastructure for the local AI wave, rather than continuing to grind it out on the generalized track.