Predicting air quality through machine learning


Using statistical methods to estimate or predict the future behavior of a phenomenon is common practice in many disciplines, such as healthcare, retail, auto insurance, and customer relationship management. The goal of these methods is not necessarily to predict the exact outcome, but to determine the likelihood of different outcomes, so that one can prepare accordingly. Machine learning has further strengthened such methods, enabling them to become even more accurate through the use of more data and more computation.

However, accessing such data can be problematic, for three main reasons:

  1. Volume: it can be very expensive to transmit this information, as it can put a strain on network resources.
  2. Privacy: the collected information may be sensitive from a data-protection perspective; any process with access to this data is exposed to specific details about different people.
  3. Legislation: in certain countries, data about that country's residents may not be transferred abroad for legal reasons.

Predictive models require a lot of data, which can be problematic because such data is expensive to store and transmitting it can place a significant load on the network.

What if there were a method that enabled predictive models to be trained without having to transmit the required data in its original raw form?

To meet this challenge, we worked with Uppsala University in Sweden through the Computer Science Project Course that is part of their curriculum. Student projects like this, which let us advance certain fields of technology while maintaining important relationships with academia, are a regular part of our research work.

Air quality prediction

This year we decided to turn our attention to predicting air quality. Poor air quality has serious implications for people's health, and if we can predict it, we can adapt our behavior at different levels: as individuals, communities, nations, and even worldwide. One example is Beijing, where coal factories are shut down when the air quality is poor.

Air quality prediction is a challenging problem because air quality can vary significantly from place to place, from quiet residential areas and parks to busy streets and industrial zones. We also need to consider atmospheric conditions such as rain, barometric pressure, and temperature, which affect the amount of each pollutant in the air. The collected data can be used beyond air quality studies: we can look for predictive methods that help us proactively determine measures to improve air quality and/or protect sensitive groups from its effects.


Putting predictive models to the test

The aim of the project was to move away from the use of centralized data, that is, large aggregates of data from several air quality measuring stations. This is the common approach to training supervised machine learning models, but it requires the transmission and aggregation of large amounts of raw data.

Instead, the students examined federated learning, which makes it possible to train a machine learning model at each station and then combine the resulting models using federated averaging.
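To make the idea concrete, here is a minimal sketch of federated averaging in Python (using NumPy; the station data and model shapes are illustrative assumptions, not the students' actual code). Each station's locally trained parameters are combined into a global model, weighted by how much data each station contributed:

```python
import numpy as np

def federated_average(station_weights, station_sample_counts):
    """Combine per-station model parameters with federated averaging.

    station_weights: one entry per station, each a list of np.ndarrays
        holding that station's model parameters (e.g. per-layer weights).
    station_sample_counts: number of local training samples per station,
        used to weight each station's contribution.
    """
    total = sum(station_sample_counts)
    fractions = [n / total for n in station_sample_counts]

    averaged = []
    for layer_params in zip(*station_weights):
        # Weighted sum of this layer's parameters across all stations.
        averaged.append(sum(f * p for f, p in zip(fractions, layer_params)))
    return averaged

# Illustrative example: two stations, each with a tiny two-layer model.
station_a = [np.ones((3, 3)), np.zeros(3)]
station_b = [np.full((3, 3), 3.0), np.ones(3)]
global_model = federated_average([station_a, station_b], [100, 300])
print(global_model[0][0, 0])  # 0.25 * 1 + 0.75 * 3 = 2.5
```

Only these parameter arrays ever leave a station; the raw measurements stay local.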


Central vs. federated model

Our article Privacy Conscious Machine Learning with Low Networking Requirements describes the benefits of federated learning in telecommunications. Since only the parameters of the prediction models are transmitted, this can reduce the volume of traffic in the network.

As part of this project, we envisioned a decentralized setup consisting of several air quality stations, in which each station collects data for a specific area, has the compute capabilities to train a prediction model on its locally collected data, and can communicate with the other air quality stations.

Since such a setup does not yet exist, the students simulated it using measurements collected by the Swedish Meteorological and Hydrological Institute (SMHI). Although this was a centralized data set, the students partitioned it per measuring station (Stockholm E4/E20 Lilla Essingen, Stockholm Sveavägen 59, Stockholm Hornsgatan 108, and Stockholm Torkel Knutssonsgatan) and trained four individual models, which were later aggregated using federated averaging.
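As an illustration of that partitioning step, here is a hypothetical sketch with pandas (the file name and column names are assumptions; the real SMHI export has its own schema):

```python
import pandas as pd

# Hypothetical file and column names; the real SMHI export differs.
df = pd.read_csv("smhi_air_quality.csv")

stations = [
    "Stockholm E4/E20 Lilla Essingen",
    "Stockholm Sveavägen 59",
    "Stockholm Hornsgatan 108",
    "Stockholm Torkel Knutssonsgatan",
]

# One "local" data set per simulated station.
local_data = {name: df[df["station"] == name] for name in stations}

for name, data in local_data.items():
    print(f"{name}: {len(data)} samples")
    # Each partition then trains its own local model, e.g.:
    # local_models[name] = train_local_model(data)  # hypothetical helper
```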

Validating results always requires a basis for comparison. In this case, a strong centralized model was developed to validate the federated models against. Each student worked on the same training/test/validation data set but explored it in different ways, using different features and different machine learning architectures. Testing such models in parallel and evaluating them on their accuracy, measured as Symmetric Mean Absolute Percentage Error (SMAPE) and Mean Absolute Error (MAE), allowed the students to cover a wide range of settings and produce a strong centralized model.
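For reference, both metrics are easy to express directly. Here is a short sketch; note that several SMAPE variants exist, and this one is on a 0 to 1 scale, matching the result ranges reported below:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: the average magnitude of prediction errors."""
    return np.mean(np.abs(y_true - y_pred))

def smape(y_true, y_pred):
    """Symmetric Mean Absolute Percentage Error on a 0-1 scale.

    Several SMAPE variants exist; this one divides each error by the
    mean of the absolute actual and predicted values.
    """
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    return np.mean(np.abs(y_true - y_pred) / denom)

y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 33.0])
print(mae(y_true, y_pred))    # ~2.33
print(smape(y_true, y_pred))  # ~0.13
```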

Results

Ten input features were used for the machine learning models trained by the students:

Input features used in the air quality models

Various models were implemented, including Long Short-Term Memory networks (LSTMs) and Deep Neural Networks (DNNs), to predict the next 1, 6, and 24 hours.
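As an illustration of the model family rather than the students' exact architectures, a minimal Keras LSTM forecaster under assumed input shapes might look like this:

```python
import numpy as np
from tensorflow import keras

WINDOW = 24      # hours of history fed to the model (an assumption)
N_FEATURES = 10  # the ten input features from the table above
HORIZON = 1      # forecasting offset in hours (1, 6, or 24 in the project)

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    keras.layers.LSTM(64),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),  # predicted pollutant level HORIZON hours ahead
])
model.compile(optimizer="adam", loss="mae")

# Dummy data just to show the expected shapes.
x = np.random.rand(256, WINDOW, N_FEATURES)
y = np.random.rand(256, 1)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```

In the federated setting, each station would train a copy of such a model locally, and only the resulting weights would be averaged.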

In the centralized case, the models predicting the next hour performed better on average than those predicting the next day. The values ranged from 0.282 to 0.5214 SMAPE and from 0.22 to 0.47 MAE.

On the federated side, similar MAE values were observed, indicating that the decentralized setup we originally envisioned could be supported by decentralized training techniques such as federated learning.

If you want to dig into the details, have a look at the project reports from the two project groups. You can find everything you need to know about the implementation in our Decentralized monitoring and forecasting of air quality GitHub repository.

Wrapping up

Ericsson Research and Uppsala University have a long history of working together on the Computer Science Project Course, where we, as industry, can introduce a group of brave students to a challenging problem. Our advice, combined with supervision from the university, enables the students to organize and tackle the challenge themselves. This usually means adopting the Scrum way of working, putting a lot of effort into developing a working prototype, having fun at a social event, and finally sharing the code base and project report to make the work available to others.



But this year was different. Due to the COVID-19 pandemic, the entire course had to be run remotely. This was an interesting challenge, since the students could not be in the same room, where they would normally work side by side on the assignment and get to know each other while whiteboarding. Instead, they had to do everything remotely, including the social event. To make this productive, the teaching assistants ran a small competition, a mini version of Kaggle, that allowed the students to try different hyperparameters while tuning their prediction models.

It’s great to see how techniques like federated learning can help create a more sustainable planet, not only by simplifying and improving the training process of a machine learning model and its overall lifecycle management, but also by improving people’s quality of life through air quality forecasting. We look forward to further applications of federated learning and other techniques that contribute to this end.

Learn more

Read the project report from group 1

Read the project report from group 2

Find details about the implementation in our Decentralized monitoring and forecasting of air quality GitHub repository.

Find out how ICT, including AI, is helping to create a sustainable future.
