AWS researchers are developing “TabTransformer” to bring the power of deep learning to data in tables

The most powerful AI systems have deep neural networks at their core. For example, Transformer-based language models such as BERT are typically the basis for natural language processing (NLP) applications. Applications that rely on data contained in tables have been an exception to the deep learning revolution, as methods based on decision trees have often outperformed deep neural networks on tabular data.

Researchers at AWS have developed TabTransformer, a novel deep tabular data modeling architecture for supervised and semi-supervised learning. TabTransformer extends Transformers beyond natural language processing to tabular data.

TabTransformer can be used for classification and regression tasks with Amazon SageMaker JumpStart, and it is accessible both through the SageMaker JumpStart UI in SageMaker Studio and from Python code via the SageMaker Python SDK. TabTransformer has attracted interest from individuals in various fields. It was presented at the ICLR 2021 Weakly Supervised Learning workshop, and it has been added to the official repository of Keras, a well-known open-source software library for working with deep neural networks.
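For readers who want to try it, the snippet below is a minimal sketch of training a JumpStart tabular model with the SageMaker Python SDK. The model_id string and S3 path are illustrative assumptions, not verified identifiers; look up the exact TabTransformer entry in the JumpStart catalog.

```python
# Minimal sketch: training a JumpStart tabular model with the SageMaker
# Python SDK. The model_id and S3 URI below are assumptions -- consult the
# JumpStart catalog for the exact TabTransformer identifier.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="pytorch-tabtransformerclassification-model",  # hypothetical ID
    instance_type="ml.m5.xlarge",  # example instance choice
)

# The training channel is assumed to point at labeled tabular data in S3.
estimator.fit({"training": "s3://my-bucket/tabular-train/"})
```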

TabTransformer uses Transformers to create reliable data representations, or embeddings, for categorical variables that can take on a finite number of discrete values, such as the months of the year. Continuous variables, such as numeric values, are handled in a parallel stream.
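To make the two-stream design concrete, here is a minimal Keras sketch in the spirit of the official Keras example. The column counts, vocabulary size, and layer dimensions are illustrative assumptions, not the paper's settings, and the feed-forward block is simplified.

```python
# Sketch of the TabTransformer idea: categorical columns are embedded and
# contextualized by Transformer blocks, while continuous columns bypass the
# Transformer; both streams are concatenated and fed to an MLP head.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CATEGORICAL = 5   # assumed number of categorical columns
VOCAB_SIZE = 100      # assumed shared vocabulary size
NUM_CONTINUOUS = 3    # assumed number of numeric columns
EMBED_DIM = 32

cat_inputs = keras.Input(shape=(NUM_CATEGORICAL,), dtype="int32")
cont_inputs = keras.Input(shape=(NUM_CONTINUOUS,))

# Embed each categorical value, then let self-attention contextualize them.
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(cat_inputs)
for _ in range(2):  # two Transformer blocks; depth is an assumption
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=EMBED_DIM)(x, x)
    x = layers.LayerNormalization()(x + attn)
    ff = layers.Dense(EMBED_DIM, activation="gelu")(x)  # simplified FFN
    x = layers.LayerNormalization()(x + ff)

# Continuous features flow through a parallel, non-Transformer stream.
cont = layers.LayerNormalization()(cont_inputs)

merged = layers.Concatenate()([layers.Flatten()(x), cont])
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(hidden)

model = keras.Model([cat_inputs, cont_inputs], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])
```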

It borrows a standard technique from NLP: pre-training a model on unlabeled data to learn a general embedding scheme, then fine-tuning it on labeled data to learn a specific task.
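As a rough illustration of that pre-training step, the sketch below masks random categorical cells so a model can be trained to reconstruct them from the surviving context, masked-language-model style. The mask token, masking rate, and helper name are assumptions for illustration.

```python
# Hedged sketch of MLM-style pretraining on tabular data: randomly mask
# categorical cells and train the model to recover them from context.
import numpy as np

MASK_ID = 0        # assume index 0 is reserved as the [MASK] token
MASK_RATE = 0.30   # assumed fraction of cells to mask

def mask_categorical_batch(cat_batch: np.ndarray):
    """Return (masked inputs, original targets, boolean mask)."""
    mask = np.random.rand(*cat_batch.shape) < MASK_RATE
    masked = np.where(mask, MASK_ID, cat_batch)
    return masked, cat_batch, mask

# During pretraining, cross-entropy loss is computed on masked positions
# only; fine-tuning then reuses the learned embeddings with a labeled head.
```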

TabTransformer outperforms state-of-the-art deep learning methods for tabular data by at least 1.0 percent in mean AUC (the area under the receiver operating characteristic curve, which plots the true positive rate against the false positive rate) in studies of 15 publicly available datasets, and it matches the effectiveness of tree-based ensemble models. In semi-supervised settings where labeled data is limited, deep neural networks often outperform decision-tree-based models because they can make better use of unlabeled data; with the new unsupervised pre-training method, TabTransformer demonstrated an average AUC increase of 2.1 percent over the leading DNN benchmark.
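For reference, AUC is straightforward to compute with scikit-learn. The toy labels and scores below are invented purely to show the call.

```python
# AUC = area under the ROC curve; 1.0 means perfect ranking, 0.5 is chance.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]                   # made-up ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]     # made-up predicted probabilities

print(roc_auc_score(y_true, y_score))
```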

The contextual embeddings learned by TabTransformer are robust to missing and noisy data features and offer greater interpretability, as the researchers show in the final section of their analysis. In their experiments, the researchers converted data types including text, ZIP codes, and IP addresses into either numeric or categorical features using typical feature engineering approaches, as sketched below.
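Here is a hedged pandas sketch of that kind of preprocessing. The column names and the /16 bucketing of IP addresses are illustrative choices, not the paper's exact recipe.

```python
# Illustrative feature engineering: turn raw ZIP codes and IP addresses
# into integer-encoded categorical features a tabular model can consume.
import pandas as pd

df = pd.DataFrame({
    "zip": ["98109", "10001", "98109"],
    "ip": ["192.168.0.1", "10.0.0.7", "192.168.4.2"],
})

# Treat ZIP codes as categorical: integer-encode the distinct values.
df["zip_cat"] = df["zip"].astype("category").cat.codes

# Reduce IP addresses to their /16 prefix, then encode as a category.
df["ip_prefix"] = df["ip"].str.split(".").str[:2].str.join(".")
df["ip_cat"] = df["ip_prefix"].astype("category").cat.codes
```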

TabTransformer paves the way for bringing the power of deep learning to data in tables.

This article is written as a summary by Marktechpost staff based on the research paper 'TabTransformer: Tabular Data Modeling Using Contextual Embeddings'. All credit for this research goes to the researchers on this project. Check out the paper, GitHub repo, and AWS article.

Please Don't Forget To Join Our ML Subreddit