The relevance of data today is well known: it fuels emerging technologies such as Artificial Intelligence and Machine Learning, improves decision making, enables ultra-targeted advertising, and so on. Indeed, 78% of IT decision makers agree that data collection and analysis have the potential to change the way their companies do business in the next one to three years.
However, data processing has a major drawback: the exposure of users’ privacy. To mitigate this risk, regulations such as the GDPR make the protection of personal data a core obligation for data controllers.
The most recent studies show that companies have made significant efforts to safeguard data privacy, although these have not been sufficient. In 2021, 95% of business leaders reported having strong or very strong data protection measures in place, yet 62% agreed that their companies should do more. From the users’ perspective, the picture is no more encouraging: users do not trust that their data is genuinely protected. Eighty-six per cent said they had growing concerns about data privacy, with around half fearing that their data could be hacked (51%) or sold (47%).
The truth is that there are still gaps in protecting data throughout its life cycle. This is the case, for example, with data analytics and data sharing: unlike data at rest or in transit, data sets are exposed while they are being used, because conventional encryption requires them to be decrypted before they can be operated on.
Thus, one of the main challenges facing companies right now is how to analyse data while protecting it and respecting the privacy requirements of the individuals it concerns. It is in this context that the recent strategic categorisation of so-called Privacy-Enhancing Computations (PECs) has to be understood: a set of technologies that allow data to be analysed and shared without exposing its content to third parties, thus securing the data while it is in use.
PECs have been applied in the public and academic sectors for years. Originally, the term referred to a group of relatively simple information-masking technologies, such as anonymisation and pseudonymisation, which prevent the identification of the individuals concerned.
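As a minimal illustration of the pseudonymisation idea (the key handling shown is deliberately simplified, and all names and values here are invented; in practice the key would live in a key-management system), a direct identifier can be replaced with a keyed token so that records remain linkable per user without revealing who they belong to:

```python
import hmac
import hashlib

# Hypothetical secret held by the data controller; shown inline only for brevity.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, repeatable token.

    Using an HMAC rather than a plain hash means an attacker cannot simply
    hash a list of known emails and match the results against the tokens.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "purchases": 12}
record["email"] = pseudonymise(record["email"])
print(record)  # the token is stable, so records can still be joined per user
```

Because the same input always yields the same token, analyses such as per-user aggregation still work; only the mapping back to a real identity is withheld.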
These earlier techniques, however, were not entirely effective: by combining a masked dataset with additional datasets, the original database can often be reconstructed and the subjects re-identified. Now, with the growing interest, PECs are reaching the level of refinement needed to meet the demands of companies. This branch of technology is currently developing at a very high rate, above the average rate of improvement of other technologies: according to data from the Massachusetts Institute of Technology’s research portal, innovation in PEC is growing at 178% per year, second only to cloud computing.
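The re-identification weakness described above is the classic linkage attack. The sketch below, using invented toy data, shows how quasi-identifiers left in a masked dataset (postcode, birth year, sex) can be joined against a public auxiliary source such as a voter roll to recover identities:

```python
# Toy "anonymised" medical data: names removed, but quasi-identifiers kept.
anonymised = [
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "10001", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
]

# Public auxiliary data containing the same quasi-identifiers plus names.
voter_roll = [
    {"name": "Jane Doe", "zip": "02139", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "10001", "birth_year": 1990, "sex": "M"},
]

# Joining on (zip, birth_year, sex) re-identifies every "anonymous" record.
index = {(v["zip"], v["birth_year"], v["sex"]): v["name"] for v in voter_roll}
for row in anonymised:
    key = (row["zip"], row["birth_year"], row["sex"])
    print(index.get(key, "<no match>"), "->", row["diagnosis"])
```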
As a result of recent advances in privacy enhancement, new, more sophisticated and effective PEC technologies have emerged and are now gaining attention and beginning to be applied in practical projects. A recent World Economic Forum report identifies and differentiates 5 emerging PEC techniques:

- Differential privacy, which adds calibrated statistical noise to results so that no individual record can be inferred from them.
- Homomorphic encryption, which allows computations to be performed directly on encrypted data.
- Secure multiparty computation, which lets several parties jointly compute a result without revealing their inputs to one another.
- Federated analytics, which runs the analysis where the data lives and shares only aggregates.
- Zero-knowledge proofs, which demonstrate that a statement is true without disclosing the underlying information.
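Of these, differential privacy is the easiest to illustrate in a few lines. The sketch below (a minimal illustration with invented data, not a production mechanism) answers a count query by adding Laplace noise calibrated to the query’s sensitivity, so the published result barely depends on whether any one individual is in the dataset:

```python
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count: the true count plus Laplace(1/epsilon) noise.

    A count changes by at most 1 when any one person is added or removed
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two Exponential(epsilon) draws follows a Laplace
    # distribution with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 47, 52, 61, 19, 44]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count of people 40+
```

The epsilon parameter governs the privacy/accuracy trade-off: smaller values add more noise and give stronger guarantees.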
Judging by the pace of innovation in PECs, the emerging techniques described above will continue to mature, and the arrival of even more complex and effective ones cannot be ruled out.
For companies, their adoption will lead to a substantial improvement in data protection since, as indicated above, these techniques focus on the analytical stage, when data is most exposed and for which no satisfactory solution had yet been found. This will lead to two potential benefits.
Furthermore, the incorporation of techniques such as differential privacy, homomorphic encryption and secure multiparty computation provides the opportunity to share data sets and allow other parties to operate on them without exposing their content. Precisely one of the biggest risks in the relationship with third parties is the breach of data privacy: studies such as Forrester’s indicate that the costs derived from a data breach increase by an average of 370,000 dollars when it is caused by a third party. Implementing these innovations will therefore mean working securely in multiple, untrusted environments, consolidating three current practices.
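To give a flavour of the multiparty idea, the sketch below uses additive secret sharing, a basic building block of secure multiparty computation (the hospital scenario, the number of parties and the modulus are invented for illustration): each input is split into random-looking shares, the parties add shares locally, and only the final aggregate is ever revealed:

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int = 3) -> list[int]:
    """Split a secret into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Two hospitals secret-share their patient counts among three compute parties.
shares_a = share(1200)
shares_b = share(950)

# Each party adds the two shares it holds, never seeing either input in the clear.
sum_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]

print(reconstruct(sum_shares))  # 2150: only the aggregate is revealed
```

Because additive sharing is linear, sums and means can be computed share by share; more elaborate protocols extend the same principle to multiplications and full analyses.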
Finally, decentralised approaches such as federated analytics will reduce companies’ internal access to the data they collect. Others, such as zero-knowledge proofs, will minimise the information disclosed without losing its value. Companies will thus retain the same or greater capacity to collect and analyse information, but with less visibility into the data of individual users. Combined with the technologies above, the result is a context in which the value of data is maximised while it is kept hidden from those who handle it, whether they are the data controllers or their partners.
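A minimal sketch of the federated pattern (device names and readings are invented): each client computes a small local aggregate on-device, and only those aggregates, never the raw records, travel to the coordinator:

```python
# Raw data stays on each client device; only (sum, count) pairs are shared.
client_datasets = {
    "device_1": [3.2, 4.1, 5.0],
    "device_2": [2.8, 3.9],
    "device_3": [4.4, 4.6, 5.1, 3.7],
}

def local_aggregate(values: list[float]) -> tuple[float, int]:
    """Computed on-device: reveals a summary, not the individual records."""
    return sum(values), len(values)

# The coordinator combines the local summaries into a global mean.
updates = [local_aggregate(v) for v in client_datasets.values()]
total, count = map(sum, zip(*updates))
print(f"Federated mean: {total / count:.2f}")
```

In real deployments the updates are typically also encrypted or securely aggregated, so the coordinator cannot single out any one client’s contribution.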
Because of the improved protection and the other potential benefits, some reports estimate that the adoption of emerging PEC techniques will be rapid: by 2025, 50% of large enterprises will adopt PECs to securely process their data.
A number of large companies are already investing in and starting to apply the PEC techniques described above.
To conclude, it can be said that PECs have undergone a substantial evolution in a short period of time. A number of emerging techniques have been added, displacing the original privacy-enhancing techniques, which hardly offered a satisfactory answer. The new ones, by contrast, offer more refined protection through several kinds of solutions: analysis of encrypted data, creation of trusted environments for data sharing, and decentralised analysis.
The picture that opens up for the new PECs is one in which the value of data is maximised while it remains hidden from data handlers and their partners, so that companies come ever closer to being able to analyse all kinds of data without violating users’ privacy.
The next step is the adoption of this set of innovations, which is already under way in large companies that are testing and refining their own models. Within five years, probably more than half will have fully integrated them into their Big Data processes.