Artificial intelligence now seems an inescapable force that is already well established. According to McKinsey, 79% of the global workforce has already been exposed to generative AI at least once, and Hostinger reports that more than 40% of businesses are seeing a positive impact after automating parts of their operations with AI.
However, behind this meteoric rise of AI lies a grey area: the models are veritable black boxes, whose inner workings are known only to their creators.
No trust in opacity
The rationality and explainability of results are among the major concerns surrounding the use of artificial intelligence. Is the chosen model appropriate? Is the data used to train the AI truly representative? What are the potential biases?
It is difficult to have confidence when the mechanisms of an AI model remain secret, especially in this age of transparency. If an AI proposes a specific medical diagnosis, how can healthcare professionals assess its relevance without understanding the underlying logic of the model?
Also worth considering are the filtering layers companies use to regulate AI responses. These overlays reflect the company's own choices and orientations, and can introduce cultural and political biases.
Finally, security remains a key issue. How is the information that users feed into AI software handled? Is the data used to train the models protected against leakage or malicious use?
The Web3 solution
Faced with this issue of trust and the pitfalls of centralising AI models, a solution is emerging: relying on the Web3 approach to build these models in a transparent, decentralised and distributed way.
Two technologies make this possible: blockchain on the one hand, to ensure data ownership, provenance and governance; confidential computing on the other, to protect the confidentiality of data and models.
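To make the blockchain half more concrete, here is a minimal sketch in Python of the provenance step: fingerprinting a dataset and a model artifact so that the resulting digest can be anchored on-chain and re-checked by anyone. The file names and the on-chain publication step are placeholders for illustration, not iExec's actual API.

```python
# Minimal provenance sketch: fingerprint a dataset and a model artifact
# so that anyone can later verify they were not altered. In a real
# Web3 deployment the final digest would be anchored on a blockchain
# (e.g. published in a smart-contract event); here we only show the
# hashing side, which is the part reviewers re-run to check integrity.
import hashlib
import json
import time
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream-hash a file so large artifacts don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(dataset: Path, model: Path) -> dict:
    """Build the record whose digest would be committed on-chain."""
    record = {
        "dataset_sha256": sha256_of_file(dataset),
        "model_sha256": sha256_of_file(model),
        "timestamp": int(time.time()),
    }
    # The digest of the record itself is the value to anchor on-chain.
    record["record_sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

if __name__ == "__main__":
    # Demo with throwaway files; in practice these are the real artifacts.
    Path("train.csv").write_text("a,b\n1,2\n")
    Path("model.onnx").write_bytes(b"\x00demo-weights")
    print(provenance_record(Path("train.csv"), Path("model.onnx")))
```

Because anyone holding the same files can recompute the same digests, tampering with the training data or the published weights becomes detectable by design.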
A Web3 solution combining these two technological disruptions, such as that offered by iExec, is ideal for transforming the way AI is designed and used.
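The confidential-computing half can be illustrated in the same spirit. The toy below uses Fernet symmetric encryption to stand in for a hardware enclave; in a real TEE the data owner would encrypt against the enclave's attested public key, and everything named here is an assumption for illustration, not iExec's actual stack.

```python
# Toy analogue of confidential computing: the data owner encrypts the
# input, only the "enclave" holding the key can decrypt it and run the
# model, and only the result leaves in the clear. Real confidential
# computing relies on hardware TEEs with remote attestation; this only
# illustrates the data flow. Requires the `cryptography` package.
from cryptography.fernet import Fernet

# Key provisioned inside the enclave; the host running the service
# never sees plaintext data outside this boundary.
enclave_key = Fernet.generate_key()
enclave = Fernet(enclave_key)

def enclave_inference(encrypted_input: bytes) -> str:
    """Runs 'inside' the enclave: decrypt, score, return only the result."""
    plaintext = enclave.decrypt(encrypted_input)
    score = len(plaintext)  # placeholder for an actual model call
    return f"score={score}"

# Data-owner side: encrypt before sending, so neither the network nor
# the host operator can read the sensitive input.
token = enclave.encrypt(b"sensitive patient record")
print(enclave_inference(token))
```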
When AI models are transparent, every aspect of how they work - from the data used for training to the algorithms behind their decisions - is accessible and reviewable by all.
With decentralised power, users, researchers and even the general public can have a say in how data is used and models are built, preventing decisions from being left in the hands of a select few.
Finally, a distributed system allows for greater security and immutability of data and models. Instead of a single point of vulnerability, it offers multiple layers of protection, safeguarding data and models against fraudulent modification.
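As a toy illustration of why distribution helps, the sketch below (with simulated node responses and placeholder digests) accepts an artifact only if a majority of independent replicas report the digest that was anchored on-chain, so a single tampered copy cannot pass itself off as the original.

```python
# Minimal sketch of replica verification: ask several independent nodes
# for the digest of the same model artifact and trust it only if a
# majority agree with the digest anchored on-chain. Node responses are
# hard-coded here; in practice they would come over the network.
from collections import Counter

def verify_replicas(anchored_digest: str, replica_digests: list[str]) -> bool:
    """Majority vote: trust the artifact only if most replicas
    report the digest recorded on-chain."""
    digest, votes = Counter(replica_digests).most_common(1)[0]
    return digest == anchored_digest and votes > len(replica_digests) // 2

# One tampered replica cannot outvote the honest majority.
anchored = "ab12..."  # digest previously anchored on-chain (placeholder)
replicas = ["ab12...", "ab12...", "ff00..."]  # third node was tampered with
assert verify_replicas(anchored, replicas)
```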
Such a system, robust against corruption attempts, distributed enough to be accessible to all, and governed by an understandable, transparent logic, is deeply aligned with the interests of its users: delivering the best results with full awareness of potential biases, while ensuring the confidentiality of exchanges.
AI supported by Web3 could thus represent a major breakthrough, paving the way for artificial intelligence that is at once transparent, ethical and aligned with the interests of society.