Material Generation, AI for Customer Support, and Stochastic Parrots
Using LSTMs for efficient material generation
Research from Sandia National Laboratories accelerates phase-field micro-structure evolution predictions using surrogate Machine Learning methods
Context
When creating new materials, it is extremely important to know how they will behave under a wide variety of physical, chemical, and biological conditions. To grasp the evolution of a micro-structure's properties across these varying conditions, researchers use the phase-field method, a powerful and versatile computational approach for modeling the material's behavior. Unfortunately, high-fidelity phase-field models are computationally expensive: achieving actionable accuracy requires high-performance computing resources and sophisticated integration schemes.
What's new
In a recent publication, researchers from Sandia National Laboratories leveraged data from past phase-field simulations to predict the effect of a change in micro-structure design on the material's properties.
They started with dimensionality reduction using Principal Component Analysis (PCA), which enables the construction of a low-dimensional but faithful representation of the micro-structure's time evolution. On top of this reduced representation, they trained a surrogate machine-learned model based on an LSTM architecture. Using high-fidelity data from 300,000 time observations (5,000 simulations of 60 frames each), they achieved an impressive Absolute Relative Error (ARE) of just 6.8%.
Their LSTM model is able to simulate thermodynamic processes (e.g. the congealing of a cooling molten alloy) in only 60 ms, compared to the usual 12 minutes. As the authors put it: "As such, the computational efficiency of the LSTM model yields results 42,666 times faster than the full-scale phase-field method."
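To make the two-stage pipeline concrete, here is a minimal sketch of the general idea in Python: PCA compresses each simulation frame into a handful of latent coefficients, and an LSTM learns to roll that latent trajectory forward in time. The shapes, layer sizes, and training loop are illustrative assumptions, not the authors' exact setup, and random data stands in for real phase-field frames.

```python
# Minimal sketch: PCA for dimensionality reduction + an LSTM surrogate that
# advances the reduced micro-structure state in time. All sizes are illustrative.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

n_sims, n_frames, n_pixels, n_latent = 100, 60, 4096, 10
# Stand-in for high-fidelity phase-field frames (one row per frame).
frames = np.random.rand(n_sims * n_frames, n_pixels)

# 1) Low-dimensional but faithful representation of each frame.
pca = PCA(n_components=n_latent)
latent = pca.fit_transform(frames).astype(np.float32)
latent = latent.reshape(n_sims, n_frames, n_latent)

# 2) LSTM surrogate: predict the next latent state from the trajectory so far.
class Surrogate(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)

model = Surrogate(n_latent)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.tensor(latent[:, :-1])  # frames 0..58 as input
y = torch.tensor(latent[:, 1:])   # frames 1..59 as target
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
# At inference time, predicted latent states are mapped back to full frames
# with pca.inverse_transform, sidestepping the expensive phase-field solver.
```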

Why it matters
In recent years, the use of Machine Learning models has shown incredible results in augmenting scientific simulations and acquisition techniques. Most notably, DeepMind's AlphaFold solves the protein folding problem and DENSE, from researchers at Stanford, builds high-accuracy emulators.
Many impressive advances are also seen in less publicized domains. For instance, Martin Weigert and colleagues leverage Machine Learning to push the limits imposed on fluorescence microscopy by optics, fluorophore chemistry, and the maximum photon exposure that living samples can tolerate.
What's next
The pace of scientific discovery is ever-increasing. The application of Machine Learning to augment computational techniques as well as acquisition systems yields great promise for the fields of aerospace, energy, medicine, and others. Advanced materials are changing the world, and these new techniques are helping these innovations reach the market faster than ever before.
What can AI do for customer support?
AI is transforming customer service interactions, covering customer support from every angle
Context
In recent years, customer service has taken center stage. Shopping has become extremely customer-centric, providing shoppers with highly targeted and tailored experiences. Along with this shift, all eyes are directed at the biggest predictor of churn and customer lifetime value: customer service. By transforming customer service interactions, companies can improve every aspect of their business, including online customer experience, loyalty, brand reputation, preventive assistance, and even the generation of new revenue streams.

What's new
Customer service is going digital. There's no doubt about it. A Zendesk survey shows that excellent customer service "directly impacts long-term revenue" and "requires a wide range of channels". How can an organization ensure wide availability and quick responses at the same time? The answer is quite simple: the implementation of data-driven and automated solutions.
- Leveraging AI as a brand messenger
By developing a chat-bot that responds to customer input (text, selection, voice, etc.), companies are able to inform their customers and potential customers seamlessly. The bot is available 24/7 and helps customers easily obtain the information they need. This solution is gaining a lot of traction in industries such as fashion, tourism, food chains, and e-commerce, as well as with hotels and airlines.
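As a toy illustration of the text branch of such a bot, the sketch below maps a customer's free-text message to an intent and returns a canned answer. The intents, phrases, and replies are invented for the example; a production bot would add a dialogue layer and far more training data.

```python
# Toy intent classifier for a support chat-bot: TF-IDF features + logistic
# regression. All intents and phrases here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    ("where is my order", "track_order"),
    ("has my package shipped yet", "track_order"),
    ("i want my money back", "refund"),
    ("how do i return an item", "refund"),
    ("what are your opening hours", "store_info"),
    ("are you open on sundays", "store_info"),
]
texts, intents = zip(*training_phrases)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, intents)

answers = {
    "track_order": "You can track your order here: ...",
    "refund": "Here is how to start a return: ...",
    "store_info": "Our opening hours are listed here: ...",
}
# The bot looks up a canned reply for the predicted intent.
print(answers[classifier.predict(["when will my parcel arrive"])[0]])
```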

- Ticket sorting
Implement an intelligent sorting algorithm to prioritize ticket queues for your customer service representatives. Machine Learning algorithms take the tickets' features as inputs and yield an optimal order of resolution. The main benefit of such a solution is that a company can improve the quality and efficiency of its customer service by prioritizing the most important issues.
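A hedged sketch of what such a sorting algorithm can look like: a classifier is trained on features of past tickets (invented here for illustration), and its predicted urgency is used to order the open queue.

```python
# Ticket prioritization sketch: score open tickets by predicted urgency and
# sort the queue accordingly. Features and labels are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per ticket: [customer_tier, hours_open, num_messages, mentions_outage]
X_train = np.array([[2, 1, 1, 0], [0, 48, 5, 1], [1, 3, 2, 0], [2, 24, 8, 1]])
y_train = np.array([0, 1, 0, 1])  # 1 = ticket turned out to be urgent

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

open_tickets = np.array([[1, 30, 4, 1], [2, 2, 1, 0], [0, 10, 3, 0]])
urgency = model.predict_proba(open_tickets)[:, 1]
print("Resolution order (ticket indices):", np.argsort(-urgency))
```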
- Customer lifetime value prediction
While more of a back-office task, the prediction of customer lifetime value (CLV) has immense value. Using historical transaction data from retail and e-commerce stores, CLV can be predicted for customers old and new. The insights are immense: they can help organisations guide their product assortment and offering, as well as their marketing and promotional campaigns, all of which have a major influence on revenue.
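One simple way to frame this, sketched below with invented numbers: aggregate each customer's transaction history into recency/frequency/monetary (RFM) features and regress future spend on them. Dedicated probabilistic CLV models exist, but the sketch conveys the idea.

```python
# CLV prediction sketch: regress future spend on RFM features derived from
# historical transactions. Data and feature choices are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Per customer: [days_since_last_purchase, n_purchases, avg_order_value]
rfm = np.array([[5, 12, 80.0], [90, 2, 40.0], [30, 6, 55.0], [10, 20, 120.0]])
next_year_spend = np.array([950.0, 60.0, 300.0, 2400.0])  # observed targets

model = GradientBoostingRegressor(random_state=0).fit(rfm, next_year_spend)

new_customers = np.array([[7, 10, 90.0], [60, 3, 35.0]])
print("Predicted CLV:", model.predict(new_customers))
```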
- Using Machine Learning for extra support
So much time is lost looking for information in an organisation. Long chains of emails, endless Google searches, calls to diverse departments across multiple offices in varying time zones, you name it! When time is of the essence, as it is in customer service, this becomes a business hurdle. By centralizing and giving structure to internal documents, Machine Learning algorithms are capable of searching through them and flagging the most relevant content. An internal Google? Not quite. But the data never lies: intelligent knowledge search over private data is the key to increasing the efficiency of customer service teams.
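In its simplest form, such a search layer can be sketched as below: index the internal documents with TF-IDF and return the best match for an agent's query. Real deployments would add document chunking, access control, and semantic embeddings; the documents here are invented.

```python
# Internal knowledge search sketch: rank documents by TF-IDF cosine similarity
# to the agent's query. The documents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "How to reset a customer password in the admin console",
    "Refund policy for orders older than 30 days",
    "Escalation procedure for service outages",
]
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

query = "customer asks for a refund on an old order"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
print("Most relevant document:", documents[scores.argmax()])
```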
Why it matters
With the increasing demand for informative and timely customer service, and the churn that follows when it falls short, companies across the globe are starting to implement digital solutions. Whether these systems interact with the client or not, they leverage historical and public data sources to increase the efficiency of customer support.
What's next
As AI helpers such as chat-bots and smart search platforms become the new norm, organizations are faced with a digital transformation challenge. Visium's expertise in providing customer support solutions in multiple and diverse formats allows organizations in Switzerland and abroad to tackle this challenge head-on.
The dangers of Large Language Models
The authors of a recent paper have raised global awareness of the risks behind recent NLP trends and have urged researchers, developers, and practitioners associated with language technology to take a holistic and responsible approach.
Context
Large Language Models (LLMs) are NLP models with a huge number of parameters that are trained on large datasets for string prediction tasks. This means they predict the likelihood of a token (which can be a character, word, or string) given the previous or surrounding tokens. The model is first trained in an unsupervised manner and then fine-tuned for specific tasks. Most commonly, you will see them out in the wild outputting scores or string predictions when given text as input. Well-known examples are BERT and its derivatives, T-NLG, and GPT-3.
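To make the token-prediction framing concrete, the short sketch below queries BERT (one of the examples above) for masked-token predictions via the Hugging Face transformers library, which is assumed to be installed; each candidate token comes back with a probability score.

```python
# Masked-token prediction with BERT: the model scores candidate tokens for the
# [MASK] position given the surrounding context.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("The capital of France is [MASK]."):
    print(f"{candidate['token_str']}: {candidate['score']:.3f}")
```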
What's new
In recent years, LLMs have increased in size, meaning they are trained with more data and more parameters. In fact, the Transformer architecture continues to yield noticeable benefits from this size increase. For this reason, we can expect the size of LLMs to keep growing as long as it correlates with better performance.

Evidently, training models at such scale implies risks and costs that smaller models do not incur. These dangers are explored in a recent paper by researchers from the University of Washington and Black in AI, and can be summarized as:
- Environmental cost: training a BERT base model without fine-tuning is estimated to produce as much CO2 as a trans-American flight (~1900 lbs CO2e).
- Financial cost: in the past 6 years, the compute required to train the largest Deep Learning models has increased 300'000-fold. This raises questions about fairness, as only the world's top AI research institutions can support such costs. Furthermore, most of these models are trained on the English language, and their use is not geared towards those who could benefit from them the most (e.g. marginalized communities).
- Training with large data: the data used for these models are often collected from the internet. As internet access and contributions are not evenly distributed, this poses a risk of over-representing younger people from more developed countries. On top of that, the datasets are undocumented and too large to document post hoc. While documentation allows for potential accountability, undocumented training data perpetuates harm without recourse.
- Dangers of deploying LLMs at scale: human communication relies greatly on the implicit meaning conveyed with language. Text produced by LLMs will reproduce, and can even amplify, the biases encoded in their training data.

For more details, see the original paper.
Why it matters
The world's top researchers are putting immense time and effort into working on LLMs.
As stated by the authors: "In order to mitigate the risks that come with the creation of increasingly large LMs, we urge researchers to shift to a mindset of careful planning, along many dimensions, before starting to build either datasets or systems trained on datasets. We should consider our research time and effort a valuable resource, to be spent to the extent possible on research projects that build towards a technological ecosystem whose benefits are at least evenly distributed or better accrue to those historically most marginalized."
What's next
Currently, methods are being developed to counter these costs and risks. For instance, a recent paper proposes increasing the efficiency of Transformers with Performers.
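The core trick in Performers can be sketched in a few lines: the softmax kernel exp(q·k) is approximated with positive random features, so attention can be computed with matrix products that are linear, rather than quadratic, in sequence length. The snippet below is a simplified illustration of that idea, not the paper's full FAVOR+ mechanism.

```python
# Simplified Performer-style attention: positive random features approximate
# the softmax kernel, turning O(L^2) attention into O(L) in sequence length L.
import numpy as np

rng = np.random.default_rng(0)

def softmax_attention(Q, K, V):
    # Exact attention: materializes an L x L score matrix (quadratic cost).
    scores = np.exp(Q @ K.T / np.sqrt(Q.shape[-1]))
    return (scores / scores.sum(-1, keepdims=True)) @ V

def performer_attention(Q, K, V, m=256):
    d = Q.shape[-1]
    W = rng.standard_normal((m, d))  # random projections shared by Q and K

    def phi(X):
        X = X / d ** 0.25  # fold the 1/sqrt(d) scaling into the features
        return np.exp(X @ W.T - 0.5 * (X**2).sum(-1, keepdims=True)) / np.sqrt(m)

    Qp, Kp = phi(Q), phi(K)
    # Reordered matmuls: the L x L score matrix is never formed.
    return (Qp @ (Kp.T @ V)) / (Qp @ Kp.sum(axis=0))[:, None]

L, d = 128, 16
Q, K, V = rng.standard_normal((3, L, d))
# The two attention outputs should agree up to Monte Carlo error.
print(np.abs(softmax_attention(Q, K, V) - performer_attention(Q, K, V)).max())
```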
In general, the authors of "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" give six guidelines for future research:
- Considering environmental and financial impacts.
- Doing careful data curation and documentation.
- Engaging with stakeholders early in the design process.
- Exploring multiple possible paths towards long-term goals.
- Keeping alert to dual-use scenarios.
- Allocating research effort to mitigate harm.