Deep generative models could offer some of the most promising developments in AI



This article is by Rick Hao, Lead Deep Tech Partner at pan-European VC Speedinvest.

With an annual growth rate of 44%, the AI and machine learning market is attracting continued interest from executives across industries. With some projections estimating that AI will increase the GDP of some local economies by 26% by 2030, the rationale for the investment and hype is easy to see.

Among AI researchers and data scientists, one of the most important steps to ensure AI delivers on the promise of increased growth and productivity is to expand the breadth and capabilities of the models available to organizations. And high on the agenda is the development, training and deployment of Deep Generative Models (DGMs) – which I consider to be some of the most exciting models slated for industrial use. But why?

What are DGMs?

You’ve probably already seen the results of a DGM in action – they’re the same type of AI model that produces deepfakes and deepfake-style impressionist art. DGMs have long excited academics and researchers in computer labs because they bring together two very important techniques at the confluence of deep learning and probabilistic modeling: the generative model paradigm and neural networks.

One of two main categories of AI models, a generative model, as the name suggests, is a model that can take a data set and generate new data points based on the inputs it has received so far. This is in contrast to the more commonly used – and much easier to develop – discriminative models, which look at a data point in a data set and then label or classify it.
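The contrast between the two paradigms can be made concrete with a toy sketch. This is a minimal illustration in Python/NumPy, not a real DGM: the "generative" step fits a simple Gaussian to the data and samples new points from it, while the "discriminative" step only assigns labels to existing points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 1-D points drawn from an unknown distribution.
data = rng.normal(loc=5.0, scale=2.0, size=1000)

# Generative approach: model the data distribution itself,
# then sample NEW data points from the fitted model.
mu, sigma = data.mean(), data.std()
new_points = rng.normal(loc=mu, scale=sigma, size=10)

# Discriminative approach: learn only a decision rule that
# labels existing points, here a simple threshold at the mean.
labels = np.where(data > mu, "high", "low")

print(new_points[:3])  # freshly generated samples
print(labels[:3])      # labels assigned to existing samples
```

The generative model can keep producing novel samples indefinitely; the discriminative model can only ever answer questions about points it is shown.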

The “D” in “DGM” refers to the fact that these are not only generative models, but ones built on deep neural networks. Neural networks are computer architectures that give programs the ability to learn new patterns over time – and what makes a neural network “deep” is an increased level of complexity created by multiple hidden “layers” of reasoning between a model’s inputs and its output. This depth gives deep neural networks the ability to work with extremely complex data sets with many variables.
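The idea of stacked hidden layers can be sketched in a few lines. This is a minimal forward pass through a small network with two hidden layers, written in plain NumPy with arbitrary (untrained) weights purely to show the structure:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    """Non-linear activation applied after each hidden layer."""
    return np.maximum(0.0, x)

# A "deep" network: input -> hidden layer 1 -> hidden layer 2 -> output.
# Each hidden layer inserts another stage of non-linear transformation
# between the model's inputs and its output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # hidden layer 1
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)  # hidden layer 2
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer

def forward(x):
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ W3 + b3

x = rng.normal(size=(5, 4))  # batch of 5 inputs, 4 features each
y = forward(x)
print(y.shape)  # one output per input in the batch
```

Each additional layer lets the network compose the patterns found by the layer before it, which is what allows deep networks to capture highly complex relationships in the data.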

Taken together, this means that DGMs are models that can generate new data points based on the data fed into them, and that can handle particularly complex datasets and subjects.

The opportunities for DGMs

As mentioned above, DGMs already have some notable creative and imaginative uses, such as deepfakes or art generation. However, the full range of potential commercial and industrial applications for DGMs is vast and promises to transform a multitude of sectors.

For example, consider the problem of protein folding. Protein folding – the discovery of the 3D structure of proteins – allows us to discover which drugs and compounds interact with different types of human tissue and how. This is crucial for drug discovery and medical innovation, but discovering how proteins fold is very difficult because scientists have to dissolve and crystallize proteins before analysis, which means that the entire process for a single protein can take weeks or months. Traditional deep learning models are also insufficient to address the problem of protein folding, as their focus is primarily on classifying existing datasets rather than being able to generate their own results.

By contrast, last year DeepMind’s AlphaFold succeeded in reliably predicting how proteins fold based solely on data about their chemical composition. With the ability to generate results in hours or minutes, AlphaFold has the potential to save months of lab work and significantly accelerate research in almost every area of biology.

We’re also seeing DGMs pop up in other areas. Last month, DeepMind released AlphaCode, a code-generating AI model that successfully outperformed the average developer in tests. And the applicability of DGMs extends to fields as varied as physics, financial modeling, and logistics: by learning subtle and complex patterns that humans and other deep learning networks cannot discern, DGMs promise to deliver surprising and insightful results in almost any field.

The challenges

DGMs face some notable technical challenges, such as training them optimally (especially with limited data sets) and ensuring that they deliver consistently accurate outputs in real applications. This is a key driver of the need for further investment to ensure that DGMs can be widely deployed in production environments, thereby delivering on their economic and social promise.

Beyond the technical hurdles, however, a major challenge for DGMs lies in ethics and compliance. Due to their complexity, the decision-making process of a DGM is very difficult to understand or explain, especially for those who do not understand its architecture or operation. This lack of explainability poses the risk that an AI model develops unwarranted or unethical biases without the knowledge of its operators, which in turn produces inaccurate or discriminatory results.

Furthermore, the fact that DGMs operate at such a high level of complexity means that their results can be difficult to reproduce. This lack of reproducibility can make it hard for researchers, regulators, or the general public to have confidence in the results a model provides.

To mitigate the risks around explainability and reproducibility, development teams and data scientists who want to leverage DGMs need to follow best practices when training their models and apply recognized explainability tools in their deployments.

While DGMs are just beginning to make their way into production environments at scale, they represent some of the most promising developments in the AI world. Ultimately, the ability to detect some of the most subtle and fundamental patterns in society and nature will make these models transformative in nearly every industry. And despite the challenges of ensuring compliance and transparency, there is every reason to be optimistic and excited about the future promise of DGMs for technology, our economy, and society as a whole.
