COVID Detection, AI Misuse Retaliation, & Distribution Shift
Detecting COVID using forced cough recordings
The release of a cough-recording-based infection detection model, alongside a COVID-19 simulator, is at the forefront of AI developments tackling the virus's spread
With the second wave of the Coronavirus disease in full swing all around the world, researchers are leveraging data-driven methods in an effort to reduce virus spread. The ability to use aggregated data from the first wave allows for a diverse set of new ideas and accuracy improvements in existing solutions. Recently, we covered a virus identification technique using Computer Vision from researchers at Oxford University.
As you can imagine, the human ear cannot distinguish the coughs of COVID-19-positive patients from those of negative ones. However, a model based on a Convolutional Neural Network architecture can distinguish them with high accuracy. Indeed, it identified 98.5% of coughs from patients who tested positive, including 100% of coughs from asymptomatic patients.
Interestingly, the research group's prior work included similar algorithms for the identification of Alzheimer's disease. Unfortunately, the code does not appear to be publicly available.
In other news, AWS recently open-sourced a COVID-19 Simulator and Machine Learning Toolkit. The goal is to enable data scientists to better model and understand disease progression in a given community over time. This is done by modeling the disease progression for each individual using a finite state machine. Furthermore, the simulator allows for testing the impact of various 'what-if' intervention scenarios. The code is available here.
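The core idea of modeling each individual's disease progression as a finite state machine can be sketched in a few lines. The states and daily transition probabilities below are illustrative assumptions, not the AWS simulator's actual parameters:

```python
import random

# Minimal sketch of an individual-level disease model as a finite state
# machine. States and transition probabilities are assumed values for
# illustration only, not the AWS COVID-19 Simulator's parameters.
TRANSITIONS = {
    "susceptible": {"infected": 0.05},   # daily chance of infection
    "infected": {"recovered": 0.10},     # daily chance of recovery
    "recovered": {},                     # terminal state in this toy model
}

def step(state, rng):
    """Advance one individual by one simulated day."""
    for nxt, p in TRANSITIONS[state].items():
        if rng.random() < p:
            return nxt
    return state

def simulate(population=1000, days=60, seed=0):
    """Return the daily count of infected individuals."""
    rng = random.Random(seed)
    people = ["susceptible"] * population
    history = []
    for _ in range(days):
        people = [step(s, rng) for s in people]
        history.append(people.count("infected"))
    return history

curve = simulate()
print(max(curve))  # peak number of simultaneously infected individuals
```

Intervention scenarios can then be tested by varying the transition probabilities, e.g. lowering the infection probability to model a lockdown.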
With regard to the cough detection tool, the team is currently looking into deploying the model in a user-friendly app. This app, if approved by the FDA, could enable use cases such as daily country-wide screenings, outbreak monitoring, and test-pooling candidate selection.
In fact, it would give access to a free, convenient, and non-invasive pre-screening tool. Patients could log in every day, forcibly cough into their phone's microphone, and get information on whether they are possibly infected and hence should take a formal test. This reminds us that for data-driven solutions to work in a real-life setting, the insights must be actionable.
As they propose in the paper, “Pandemics could be a thing of the past if pre-screening tools are always on in the background and constantly improved.”
The AWS COVID-19 simulator aims to encourage data-driven decisions with regard to restrictions.
Why it matters
This research shows that using data can lead to a plethora of different solutions to a unique problem. With a complex problem such as a pandemic, many factors are at play. The large majority of these factors can be monitored, tracked, and modeled in some way, shape, or form.
Here, the unconventional idea of using cough recordings for disease detection leads to a non-invasive diagnostic tool that is essentially free and can offer quasi-unlimited throughput, real-time results, and longitudinal monitoring.
AI in the hands of the oppressed
US citizens use Computer Vision models to identify abusive law enforcement officials
Developing and deploying Machine Learning solutions is not something anyone can do. Not only do you need the technical know-how, but also the data. For a long time, only large tech companies were able to deploy large-scale, robust, high-stakes Machine Learning solutions. Recently, some individuals have proven that open-source tools, public knowledge, and off-the-shelf software components are enough to build a Machine Learning solution on one's own.
Private citizens have been using face recognition software to identify abusive law enforcement officials. Using publicly available models and crowd-sourced datasets, their solutions are able to identify police officers in photos and videos. Christopher Howell from Portland, Oregon is one of these individuals. Using images from the news, social media, and a public dataset called Cops Photo, he developed a model that can recognize about 20% of the city's police force.
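At its core, this kind of pipeline encodes each face as an embedding vector and matches queries against a gallery of known identities. The sketch below fabricates random embeddings purely to illustrate the matching step; the dimensions, labels, and threshold are assumptions, not details of Howell's system:

```python
import math
import random

# Toy sketch of the matching step in a face-identification pipeline.
# Open-source face recognition tools typically encode a face as a
# ~128-dimensional embedding; the gallery, names, and threshold below
# are fabricated for illustration only.
random.seed(0)
DIM = 128
gallery = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(20)]
names = [f"officer_{i}" for i in range(20)]  # hypothetical labels

def distance(a, b):
    """Euclidean distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(query, gallery, names, threshold=10.0):
    """Return the closest gallery identity, or None if nothing is close."""
    dists = [distance(query, g) for g in gallery]
    best = min(range(len(dists)), key=dists.__getitem__)
    return names[best] if dists[best] < threshold else None

# A slightly perturbed copy of gallery entry 3 should match it.
query = [x + random.gauss(0, 0.01) for x in gallery[3]]
print(identify(query, gallery, names))
```

The threshold is what separates "recognized" from "unknown"; setting it too loosely is exactly how such a system could misidentify the wrong officer.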
In Belarus, in the midst of a highly contested presidential election, an individual called Andrew Maximov has designed a similar solution to identify mask-wearing police officers. He demonstrates the solution in a YouTube clip.
In some jurisdictions, police officers are not required to display their name tags and are allowed to wear face masks. Moreover, the number of protests has greatly increased in the past decade, and in some countries and circumstances these protests can become incredibly violent. These examples show that in the hands of citizens, AI tools can increase police accountability and stem abuse. On the other hand, the tools could also be used with malicious intent, harassing officers who have done nothing wrong. Worse, such solutions could fall short of professional systems in accuracy, identifying the wrong officers.
Why it matters
Face recognition is a double-edged sword in a politically polarized world. This shows that adequate governance with respect to the democratization of Artificial Intelligence is essential. The use of these tools by individuals, companies, or governments, comes with immense responsibilities.
Tackling group distribution shift in production
Researchers from Berkeley have developed Adaptive Risk Minimization, a meta-learning approach for tackling group shift
There is a huge difference between Machine Learning in research and in industry. In research, enormous importance is placed on the model's performance. Unfortunately, performance on benchmarks and standardized datasets does not reflect how these solutions behave in real life.
In fact, so much is needed in addition to the model training and prediction scripts that there has been a recent boost in DevOps for Machine Learning solutions (also referred to as MLOps) led by Allegro AI, MLflow, Weights & Biases, cnvrg.io, DataRobot, DVC, Snorkel AI, and many others. The specific features of these platforms differ, but what brings them all together is the idea of standardized, collaboration-friendly tools for monitoring, testing, and versioning ML products (as well as their data, artifacts, etc.) in both development and production.
One very important aspect is the distribution shift in the production setting. Let us use the example of handwriting transcription. What happens when end users have different handwriting to the writers the training data was taken from?
This week, a team of researchers from Berkeley Artificial Intelligence Research (BAIR) published a paper proposing a new meta-learning approach: Adaptive Risk Minimization. The method aims to tackle group distribution shift, which occurs when the training and testing data are not drawn from the same underlying distribution. This phenomenon can be caused by temporal correlations, specific end users, or many other factors. More importantly, it occurs in almost all practical Machine Learning applications. Hence, a fundamental assumption of supervised learning is violated, and real-life results may not reflect the testing accuracies obtained during the training phase.
The method presented in the paper assumes that training data is distributed in groups. The data processed in production can then be modeled as a shift in the group distribution, and meta-learning is used to train models that can adapt to this shift at test time.
Let us go back to the example mentioned above. If the model is analyzing handwriting from a user, it can use batched letters (all letters used in a sentence or paragraph) to adapt to the distribution shift. For instance, if the user writes the digit 2 without a loop, then a shape that a classical approach would consider ambiguous between the letter 'a' and the digit '2' is most likely the letter 'a'. An illustration is provided above.
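The intuition behind this adaptation can be captured in a tiny Bayesian sketch. This is a simplified stand-in, not the actual ARM algorithm, and the probabilities are assumed values:

```python
# Simplified Bayesian stand-in for batch-level adaptation (not the
# actual ARM meta-learning method; all probabilities are assumed).
# An ambiguous glyph could be the letter 'a' or the digit '2'; a batch
# of the same writer's unambiguous characters shifts the prior over
# their writing style.

# Likelihood of the ambiguous shape under each label: the shape alone
# is uninformative.
likelihood = {"a": 0.5, "2": 0.5}

def posterior(prior_a):
    """P(label='a' | shape) given a writer-specific prior on 'a'."""
    num = likelihood["a"] * prior_a
    den = num + likelihood["2"] * (1 - prior_a)
    return num / den

# Without adaptation: a generic 50/50 prior leaves the shape ambiguous.
print(posterior(0.5))  # 0.5

# With adaptation: the batch shows this writer draws '2' with a loop,
# so a loop-free shape is far more likely to be an 'a'.
print(posterior(0.9))  # 0.9
```

In ARM itself the adaptation is learned end-to-end via meta-learning rather than hand-coded as a prior, but the effect is the same: the unlabeled batch disambiguates individual examples.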
The authors believe their method and its empirical results "convincingly argue for further study into general techniques for adaptive models". The real-life outcomes and consequences of AI solutions are gaining importance. Indeed, new subject areas for the upcoming Conference on Computer Vision and Pattern Recognition include Deepfake detection, Ethics in Vision, Fairness, Accountability, Privacy, and Dataset bias. It seems clear that adaptive models will be crucial for Machine Learning to achieve its potential in complex, real-world environments.
Why it matters
Handling data from new users is far from the only potential application for adaptive models. The authors state that "in an ever-changing world, autonomous cars need to adapt to new weather conditions and locations, image classifiers need to adapt to new cameras with different intrinsics, and recommender systems need to adapt to users’ evolving preferences". Humans have clearly demonstrated that they can adapt by inferring information using examples from test distributions. Will humans be able to develop methods that can allow machine learning models to do the same?