
The decentralized data aggregator

This section details the components of Mobula's decentralized data aggregation Protocol: the collection, processing and broadcasting of data. The goal is to understand the concept, not to integrate Mobula into an application - dedicated technical documentation is available to developers for that purpose.

Collecting data

The first part of the data aggregation protocol is, naturally, the collection of data itself. Collection happens in two different ways.

The first is a form accessible via a smart contract (written in Solidity for EVM-compatible blockchains), directly on-chain. Any user of an EVM-compatible blockchain can thus submit a processing request for a given crypto-asset by providing the necessary metadata themselves (for example the website, the logo, and so on for that asset). Submitting the form is subject to a fee (the amount charged is flexible and controlled by the meta-protocol DAO) to prevent flooding, and the proceeds are subsequently redistributed to the members of the protocol DAO (for the distinction between the protocol DAO and the meta-protocol DAO, see part IV) involved in the second part of the protocol (Processing data). This first collection method is essential, since it is the source of decentralization, but it is not perfect on its own: it cannot cover every important crypto-asset, as it is conceivable that nobody takes the time to fill in a form for some of them.

It therefore needs to be complemented by a second method: the meta-protocol DAO decides on a list of centralized aggregators to track, and each listing of a new crypto-asset on one of these platforms triggers an automatic form submission. While the fact that a DAO decides on the list of aggregators makes this method only slightly decentralized, each automatic listing is still submitted to the other two parts of the Protocol, so that the protocol DAO checks the consistency of every centralized listing. The rewards distributed to the members of the protocol DAO who verify this data are taken from the Mobula treasury.

This is how the aggregator obtains the data that the other two parts of the protocol focus on.
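To make this concrete, here is a minimal Solidity sketch of an on-chain submission form of the kind described above. The contract name, the metadata fields and the fee handling are illustrative assumptions made for this example, not Mobula's actual contract.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of the on-chain submission form described above.
contract AssetSubmissionForm {
    struct AssetSubmission {
        address token;     // address of the submitted crypto-asset
        string name;
        string symbol;
        string website;
        string logo;       // URL of the logo
        address submitter;
    }

    uint256 public submissionFee;   // anti-flood fee, controlled by the meta-protocol DAO
    address public metaProtocolDAO;

    AssetSubmission[] public pendingSubmissions;

    event AssetSubmitted(uint256 indexed id, address indexed token, address indexed submitter);

    constructor(uint256 _submissionFee, address _metaProtocolDAO) {
        submissionFee = _submissionFee;
        metaProtocolDAO = _metaProtocolDAO;
    }

    // Anyone can submit a crypto-asset for processing by paying the fee.
    function submitAsset(
        address token,
        string calldata name,
        string calldata symbol,
        string calldata website,
        string calldata logo
    ) external payable {
        require(msg.value >= submissionFee, "Insufficient fee");
        pendingSubmissions.push(AssetSubmission(token, name, symbol, website, logo, msg.sender));
        emit AssetSubmitted(pendingSubmissions.length - 1, token, msg.sender);
    }

    // The meta-protocol DAO can adjust the fee.
    function setSubmissionFee(uint256 newFee) external {
        require(msg.sender == metaProtocolDAO, "Only meta-protocol DAO");
        submissionFee = newFee;
    }
}

In this sketch, a submission simply joins a pending list; the redistribution of collected fees to protocol DAO members would happen during the processing stage described next.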

Processing data

The second part of the Protocol is arguably the most important and the most complex to implement: data processing. Once the data has been collected and sent through the form, it arrives in a waiting pool for the First Sort. This pool is accessible to any member of the protocol DAO of Rank 1 or higher, and each of these members can vote 'Validate' or 'Reject' on each request according to the quality of its data. Once a request has been judged a sufficient number of times (this number is flexible and determined by the meta-protocol DAO), it is rejected if the share of positive votes is insufficient (the acceptance rate is also determined by the meta-protocol DAO), or submitted to a council of Rank 2 members if it is sufficient.

This Rank 2 council has a veto right allowing it to reject a request even if it has been validated by the Rank 1 members. If it does not apply its veto, the request is validated and added to the aggregator's database. Otherwise, the efficiency score of the Rank 1 members who considered the refused asset valid is decreased: this score, initialized to 0 and incremented or decremented according to performance, helps the hierarchy gauge the quality of a Rank 1 member (more details in part IV). This is how data processing is done in the Protocol: in a fully decentralized and reliable way. Now, let us tackle the transmission and storage of this data.
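As an illustration, the sketch below shows how the First Sort and the Rank 2 veto could be expressed in Solidity. The rank bookkeeping, the vote threshold, the acceptance rate (expressed in basis points here) and the function names are assumptions made for the example; the real mechanism also covers rank management, reward distribution and efficiency-score updates.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of the First Sort voting flow and the Rank 2 veto.
contract FirstSort {
    enum Status { Pending, AwaitingRank2, Validated, Rejected }

    struct Request {
        uint256 approvals;
        uint256 rejections;
        Status status;
    }

    // Parameters controlled by the meta-protocol DAO.
    uint256 public votesRequired;      // number of Rank 1 votes that closes the First Sort
    uint256 public acceptanceRateBps;  // e.g. 6000 = 60% of positive votes required

    mapping(address => uint256) public rank;              // 0 = not a member, 1, 2, ...
    mapping(uint256 => Request) public requests;           // request id => request
    mapping(uint256 => mapping(address => bool)) public hasVoted;
    mapping(address => int256) public efficiencyScore;     // starts at 0, moves with performance

    // Rank 1 (or higher) members vote 'Validate' or 'Reject' on a pending request.
    function vote(uint256 requestId, bool validate) external {
        require(rank[msg.sender] >= 1, "Rank 1 required");
        Request storage r = requests[requestId];
        require(r.status == Status.Pending, "Not pending");
        require(!hasVoted[requestId][msg.sender], "Already voted");
        hasVoted[requestId][msg.sender] = true;

        if (validate) {
            r.approvals++;
        } else {
            r.rejections++;
        }

        // Once enough votes are in, apply the acceptance rate.
        if (r.approvals + r.rejections >= votesRequired) {
            uint256 rateBps = (r.approvals * 10000) / (r.approvals + r.rejections);
            r.status = rateBps >= acceptanceRateBps ? Status.AwaitingRank2 : Status.Rejected;
        }
    }

    // The Rank 2 council confirms the listing or applies its veto.
    function rank2Decision(uint256 requestId, bool veto) external {
        require(rank[msg.sender] >= 2, "Rank 2 required");
        Request storage r = requests[requestId];
        require(r.status == Status.AwaitingRank2, "Not awaiting Rank 2");
        r.status = veto ? Status.Rejected : Status.Validated;
        // On a veto, the efficiency score of the Rank 1 voters who validated the
        // request would be decremented here (omitted in this sketch).
    }
}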

Broadcasting data

The final phase of the Protocol is probably the most technical: storing data in a decentralized way is not easy, and neither is making it accessible afterwards. However, solutions exist, and the Protocol integrates them. For the storage of collected data, a smart contract, written in Solidity and deployed on the majority of EVM-compatible blockchains (Ethereum, Binance Smart Chain, Polygon, Avalanche and Fantom for the moment), keeps in memory a mapping linking each crypto-token address to a hash. This hash makes it possible, through the IPFS protocol, to retrieve the metadata of the requested crypto-token. The data is therefore stored on IPFS, a fully decentralized protocol. To access it, a smart-contract function lets any user obtain the IPFS hash of a crypto-token from its address, thus giving access to the data directly on-chain.
Example: Bob queries the data for 0x2260fac5e5542a773aa44fbcfedf7c193bc2c599 and obtains the corresponding IPFS hash.
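A minimal Solidity sketch of the kind of registry contract Bob would query is shown below. The contract name and the function names are illustrative assumptions, not the interface of the deployed contract.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of the on-chain registry: token address => IPFS hash.
contract MetadataRegistry {
    address public protocolDAO;

    // crypto-token address => IPFS hash (CID) of its JSON metadata file
    mapping(address => string) private ipfsHashOf;

    constructor(address _protocolDAO) {
        protocolDAO = _protocolDAO;
    }

    // Called once a request has been validated by the protocol DAO.
    function setHash(address token, string calldata ipfsHash) external {
        require(msg.sender == protocolDAO, "Only protocol DAO");
        ipfsHashOf[token] = ipfsHash;
    }

    // Any user or contract can resolve a token address to its IPFS hash,
    // then fetch the JSON metadata through the IPFS protocol or any gateway.
    function getHash(address token) external view returns (string memory) {
        return ipfsHashOf[token];
    }
}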
The file stored on IPFS is in JSON format and contains the following information:
{
  "name": "Mobula",
  "symbol": "MOBL",
  "description": "Mobula is the first decentralized data aggregator",
  "logo": "https://mobula.finance/logo.png",
  "audit": "https://certik.com/mobula",
  "website": "https://mobula.finance",
  "chat": "https://t.me/MobulaFi",
  "twitter": "https://twitter.com/MobulaFi"
}
Of course, the protocol is extensible: new entries will be debated and added by the meta-protocol DAO. In summary, the Protocol collects data directly on-chain through smart contracts (ensuring 0% downtime and possible integration by on-chain or even off-chain protocols, as sketched below), processes it through a hermetic and meritocratic DAO (ensuring transparent and decentralized data processing), and finally distributes it directly on-chain through a smart contract linked to IPFS (again with 0% downtime and no storage errors possible). It is worth noting, however, that everything relies on the quality of the protocol DAO: the quality of the processing done by its members and the quality of its decentralization. The next section sheds light on its operation and explains how the Protocol ensures these two qualities, allowing rigorous data processing.
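As a final illustration of the on-chain integration mentioned above, the hypothetical sketch below shows how another contract could consume the registry with nothing more than its address and a minimal interface; the interface name and signature are assumptions carried over from the previous sketch.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical minimal interface matching the registry sketch above.
interface IMetadataRegistry {
    function getHash(address token) external view returns (string memory);
}

// Example of an on-chain integration: a consumer contract resolving a token
// to its IPFS metadata hash entirely on-chain.
contract ConsumerExample {
    IMetadataRegistry public immutable registry;

    constructor(address registryAddress) {
        registry = IMetadataRegistry(registryAddress);
    }

    function metadataHashOf(address token) external view returns (string memory) {
        return registry.getHash(token);
    }
}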