Melanoma Detection, Bank Customer Confidence and Welding Control
Researchers from MIT have devised a system that analyzes wide-field images with DCNNs, enabling efficient early detection of skin cancer.
Melanoma is a malignant tumor that appears on the skin. Responsible for more than 70% of skin cancer-related deaths globally, its incidence has risen every year since 1979. Because early-stage tumors can be identified by visual inspection, detecting them with Computer Vision techniques has become widely popular over the last five to eight years.
The rationale behind the development of such solutions seems sound at first. In fact, when melanoma is diagnosed while still confined to the outer layers of the skin, the 5-year survival rate is approximately 98%. The problem arises when these types of solutions are made available to the general public via mobile apps, evolving without regulation or oversight and potentially driving patients to delay clinical consultations. The large majority of Machine Learning models trained to detect Suspicious Pigmented Lesions (SPLs) leverage the ISIC database, which presents in-focus macro images of pigmented skin lesions. While they often show high accuracy on this benchmark dataset, the way mobile phones acquire images is completely different!
The challenge is two-fold. First, mobile phones often have wide-field cameras, so the captured images don't match the distribution of the benchmark datasets the models are trained on. Second, the models are usually trained to decide whether a single picture contains melanoma or not, whereas wide-field images often contain a high volume of pigmented lesions that all need to be evaluated for potential biopsies.
Recently, researchers from MIT, Cambridge, and Harvard University have developed a pipeline leveraging deep convolutional neural networks (DCNNs) to detect and classify SPLs through the use of wide-field photography.
How it works:
- A wide-field image is acquired by a patient's primary-care physician
- The system automatically extracts all pigmented skin lesions observable in the image
- The trained DCNN determines the suspiciousness of individual pigmented lesions and marks them accordingly
- Results are displayed in a heatmap format to show extracted features, allowing further assessment by the physician
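Sketched in code, the screening flow above might look like the following toy pipeline. The extraction step and the classifier here are crude stand-ins for illustration only, not the authors' actual DCNN:

```python
import numpy as np
from scipy import ndimage

def extract_lesions(image, threshold=0.5):
    """Crude stand-in for lesion extraction: threshold darker pigmented
    regions and return one bounding box per connected component."""
    labeled, _ = ndimage.label(image < threshold)
    return ndimage.find_objects(labeled)

def classify_lesion(patch):
    """Stub for the trained DCNN: suspiciousness score in [0, 1].
    Mean darkness serves as a toy proxy here."""
    return float(1.0 - patch.mean())

def screen_wide_field(image):
    """One pass over a wide-field image: extract every lesion and score it."""
    return [(box, classify_lesion(image[box])) for box in extract_lesions(image)]

# Toy "wide-field image": bright skin (1.0) with two dark lesions.
img = np.ones((20, 20))
img[2:5, 2:5] = 0.2       # lesion 1
img[10:14, 12:16] = 0.1   # lesion 2 (darker, so it should score higher)
results = screen_wide_field(img)
```

The key property mirrored here is that a single wide-field capture yields one score per lesion, rather than requiring one photo per lesion.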
Soenksen, the paper's first author, explains that “Early detection of SPLs can save lives; however, the current capacity of medical systems to provide comprehensive skin screenings at scale are still lacking.”
Unlike previous research on this application, Soenksen et al.'s model was trained on wide-field images. More specifically, they used 20'388 images from 133 patients in Madrid's Hospital Gregorio Marañón. They were able to train their system to achieve 90.3 percent sensitivity in distinguishing suspicious from non-suspicious lesions. The main breakthrough is the system's capacity to detect and separate all the pigmented lesions from a single image, reducing the immense burden of having to capture one image per lesion.
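For context, sensitivity is the share of genuinely suspicious lesions the system flags. A minimal illustration, with counts made up for the example rather than taken from the paper:

```python
def sensitivity(true_positives, false_negatives):
    """Sensitivity (recall): flagged suspicious lesions divided by
    all lesions that were actually suspicious."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts, for illustration only: out of 1000 truly
# suspicious lesions, 903 were flagged and 97 were missed.
rate = sensitivity(903, 97)  # 0.903, i.e. 90.3% sensitivity
```

Note that sensitivity says nothing about false alarms; a clinical tool would also be judged on specificity.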
Why it matters
The early detection of melanoma improves prognoses, which can truly save lives. Beyond that, early-stage identification can lead to a 20-fold reduction in treatment cost. While consumer applications exist, efficient clinical tools for the detection of SPLs are mostly absent.
This method could allow for rapid and accurate assessments of pigmented lesion suspiciousness within a primary care visit and could enable improved patient triaging, utilization of resources, and earlier treatment of melanoma.
Soenksen et al.
“Our research suggests that systems leveraging computer vision and deep neural networks, quantifying such common signs, can achieve comparable accuracy to expert dermatologists,” Soenksen explains. “We hope our research revitalizes the desire to deliver more efficient dermatological screenings in primary care settings to drive adequate referrals.”
A recently published IEEE Playbook helps define the value of trusted AI systems in the financial industry
Data is the fuel of the AI revolution. Without data, there would be no insights to distill, no information to leverage, and no processes to automate. Therefore, it comes as no surprise that data is currently being accessed, shared, and utilized in diverse ways across the world. That's a good thing, no?
The issue with the increasing rate of data creation and usage across the globe is the lack of unified policy, technology, or cultural guidelines. In the financial industry, this data and the tools built around it are leveraged to provide products and advice that directly impact the lives of customers. Examples include, but are not limited to, credit decision-making, risk management, and personalized banking solutions.
There is an important need for guidelines that incorporate ethics, trust, and fairness into these tools. There is no future for AI in these industries without setting a fair, transparent, and accountable environment in order to build customer confidence in these novel systems.
Earlier this month, IEEE (Institute of Electrical and Electronics Engineers) published its Finance Playbook for AI Ethics. Its goal is to provide a roadmap for trusted data and Artificial Intelligence Systems (AIS) in financial services. Written by fifty thought-leaders in the industry, the playbook provides a theoretical framework for implementing responsible data and AIS in financial institutions.
The playbook gives a concise yet in-depth overview of the high-value AIS use-cases in the finance industry.
More importantly, the writers also provide valuable insight into the key ethical concerns for each of these use-cases. They outline these concerns as:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
That's great, but how do we go about mitigating those concerns?
In the financial industry, there are three pillars on which successful and ethical AI Systems can rest: (1) People, (2) Process, and (3) Technology.
The predominant factor in embedding AI ethics into systems appears to be the Process pillar. In fact, standards and certifications are emerging around the globe, such as the IEEE 7010 standards family and The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS). Their objective is to let organizations certify governance and technical initiatives, which helps build customer confidence.
Why it matters
In recent years, we have seen trust in technology decline, as can be observed in the 2021 Edelman Trust Barometer. Financial services are particularly distrusted, which is very worrying for the industry.
Therefore, it is more important than ever to rebuild customer trust and stand out in comparison to competitors with regards to customer confidence.
It is critical to remember that, in the grand scheme of things, AI technology is still in its early stages. However, the impacts that AI systems are having worldwide, while very heterogeneous, are very real. The design and operation of any AI system, across its full value chain and development lifecycle, need to be prioritized.
The above-mentioned Playbook is available here.
Farm equipment manufacturer John Deere teamed up with Intel to detect porosity in their welding process using Computer Vision technology.
Large manufacturing companies use robotic welders to assemble metal parts (Gas Metal Arc Welding or GMAW). A human operator is usually in charge of multiple robotic welders, making it difficult to spot potential errors or inconsistencies. The most common defect in robotic welding is porosity: as a weld cools, small bubbles of gas can become trapped inside the joint.
In the case of John Deere, an equipment maker that produces machines for farming, forestry, and construction, hundreds of robotic arms consume millions of pounds of weld wire every year. With qualified human inspectors in short supply, companies like John Deere are having increasing difficulty staffing their factories.
John Deere teamed up with Intel to implement a solution based on Artificial Intelligence to detect faulty welded joints.
The developed model was trained with footage from a ruggedized camera placed on the robotic welder. Interestingly, the videos were lit only by using the welding sparks, solving a common lighting problem for quality control solutions leveraging Computer Vision technology. When a faulty weld is detected, the robot stops functioning and a human operator is notified.
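The stop-and-notify behavior described above could be sketched roughly as follows. The per-frame scoring function is a dummy stand-in; the article does not detail Deere and Intel's actual software stack:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class WeldMonitor:
    """Watches per-frame defect scores from a camera feed; halts the
    robot and queues an operator alert once a porosity threshold is hit."""
    score_frame: Callable[[object], float]  # stand-in for the trained model
    threshold: float = 0.9
    halted: bool = False
    alerts: list = field(default_factory=list)

    def process(self, frame):
        score = self.score_frame(frame)
        if score >= self.threshold and not self.halted:
            self.halted = True  # stop the robotic welder
            self.alerts.append(f"porosity suspected (score={score:.2f})")
        return score

# Toy model: "frames" are just precomputed defect probabilities.
monitor = WeldMonitor(score_frame=lambda frame: frame)
for frame in [0.1, 0.3, 0.95, 0.2]:
    monitor.process(frame)
```

The design choice worth noting is that the robot halts itself immediately, so a human only intervenes on flagged welds instead of inspecting every joint.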
The model reaches an impressive testing accuracy of more than 97%. John Deere is particularly pleased with the solution, as it allows them to innovate by using modern technology to solve age-old problems.
Welding is a complicated process. This AI solution has the potential to help us produce our high-quality machines more efficiently than before. The introduction of new technology into manufacturing is opening up new opportunities and changing the way we think about some processes that haven’t changed in years.
Andy Benko, Quality Director at John Deere Construction & Forestry
Why it matters
Defects like porosity often go unnoticed until later in the manufacturing process. This is quite problematic as the part may have already been joined to other machinery. A single faulty weld can cost up to $10'000.
Using real-time technology to detect and deal with porosity is saving John Deere a lot of time and money. Beyond that, they have been able to drastically reduce the number of non-conformance reports.
Quality control solutions that leverage Computer Vision techniques are quickly gaining ground. Is your company struggling with hiring expensive line operators and detecting defects in your manufacturing facilities?
Visium offers the technical skills and past experiences required to augment your quality control operations. Let's create something amazing together!