In my early work as a writer and editor, I noticed a lot of patterns in language. These observations led me to the field of natural language processing. I wanted to research methods to automate editing, solve problems in communication, and ultimately, identify ways that clearer language could positively impact people’s lives.

Technology enables communication in many ways. It helps us identify and better convey the most important information, and reach a broader audience with that message. But these advancements also have a cost: information overload. Sifting through messages and finding information that’s actually relevant to us has become much more difficult.

As a research scientist, my work isn’t just about big data. My job is to merge powerful insights derived from data with the creation of something special, impactful, and meaningful, and to be conscious of how that can have a positive or negative impact on the future of communications. It’s a mix of art and science. Here’s my take on three leading trends in my field, and why each needs the right balance of art and science to work.

Enriching automatic models with real-world knowledge

Artificial intelligence (AI) is advancing many fields, including communications. AI can help marketers strategize more effectively, translate content into different languages, segment audiences, and even write some content for us. Just consider the writing robot that made it past the first stage of a literary contest in Japan a few years ago!

Around the world, AI is being used to remove language barriers and help people communicate better. Today’s translation models have made great leaps toward near-human performance, for example in translating Chinese news articles into English. But the data that’s driving AI is only part of the equation. For the technology to make more of an impact, it needs an infusion of real-world knowledge. That will enable better recommendations and suggestions on how to rewrite text and generate messages.

At Grammarly, we rely on AI to provide automatic feedback to our users. Alongside the automatically generated suggestions, we supplement that feedback with hand-curated content written by experts (linguists and editors) that explains each recommended change. This elevates the product from a magic wand that fixes mistakes to an educational tool that can also help people become better writers over time.
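To make the idea concrete, here is a minimal, hypothetical sketch in Python of pairing an automatic suggestion with an expert-written explanation. The suggestion types, explanation text, and the annotate helper are all illustrative; this is not Grammarly’s actual system or API.

```python
# Hypothetical sketch: attach a hand-curated, expert-written explanation
# to an automatically generated suggestion. All names are illustrative.

# Expert-written explanations keyed by suggestion type (invented examples).
CURATED_EXPLANATIONS = {
    "comma_splice": (
        "Two independent clauses joined only by a comma usually need a "
        "conjunction or a semicolon."
    ),
    "passive_voice": (
        "Active voice often makes the actor of the sentence clearer."
    ),
}

def annotate(suggestion_type: str, original: str, replacement: str) -> dict:
    """Combine an automatic edit with the matching curated explanation."""
    return {
        "original": original,
        "replacement": replacement,
        "why": CURATED_EXPLANATIONS.get(suggestion_type, "Suggested edit."),
    }

print(annotate("comma_splice",
               "It was late, we kept writing.",
               "It was late; we kept writing."))
```

The design choice this illustrates is simple: the model proposes the edit, but the educational value comes from the curated “why” that travels with it.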

Recognizing bias and encouraging diversity in the research community

In 2018, there was a concerted effort to train AI models to identify bias in other models and their input data, and to find methods to reduce that bias. The problem is usually not the algorithms themselves but the input data: the software learns to perform computational tasks by looking for patterns in data sets, so a dataset needs to be constructed carefully to avoid inducing social bias. For example, if a hiring algorithm is taught using data about successful past candidates that includes mostly older white males, the AI will recommend older white males. A recent study conducted by MIT and Stanford researchers found gender and skin-type biases in commercial AI systems, such as facial-analysis software. In the experiments, error rates in determining the gender of light-skinned men were very low, never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned to as much as 34 percent in some cases.
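To illustrate the mechanism, here is a small, self-contained Python sketch on synthetic data (not the data from the study above): a classifier trained on skewed historical hiring decisions ends up scoring two otherwise identical candidates very differently based only on group membership. Every feature, label, and number here is invented for illustration.

```python
# Toy illustration of dataset-induced bias, using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features: [years_experience, group], where "group" encodes a demographic
# attribute (0 or 1). In this synthetic history, group 0 was almost always
# hired and group 1 rarely was, regardless of experience.
n = 1000
experience = rng.uniform(0, 10, n)
group = rng.integers(0, 2, n)
hired = ((group == 0) | (rng.random(n) < 0.1)).astype(int)

X = np.column_stack([experience, group])
model = LogisticRegression().fit(X, hired)

# Two identical candidates who differ only in group membership:
candidates = np.array([[5.0, 0], [5.0, 1]])
print(model.predict_proba(candidates)[:, 1])  # group 0 scores far higher
```

The point is that the model faithfully reproduces whatever pattern dominates its training set, which is exactly why careful dataset construction matters.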

Beyond recognizing the problem, we need to train organizations that use AI to identify implicit and unconscious bias and address it. Combating bias requires more than unbiased data sets; we also need more diverse researchers who represent different worldviews and experiences. This is the human element, the “art.” While this change won’t happen overnight, I believe more people are thinking about biases and the lack of diversity in the research community and attempting to solve that.

Bringing humans in the loop to help explain the reasoning of machine learning models

Methods for applying AI to new problems are on the rise: We’ve witnessed great improvements thanks to transfer learning, and there has been more focus on reinforcement learning. Domain adaptation is an increasingly big focus: How can we leverage a model trained on a known problem to tackle a new one while relying on less labeled data? At the same time, there is growing demand for transparent, or explainable, AI models. Consumers want to know why models make the decisions they do (this is even explicitly codified in the GDPR). However, as we train more complex AI models and transfer decisions from one domain to a new one, the reasoning behind a model’s decisions becomes less clear. It isn’t enough to know how to engineer a model and rely on automatic metrics to evaluate its performance. We also need analytical reasoning to understand a model’s decisions, determine whether it is trustworthy and robust, and communicate those justifications to the people who rely on it.
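As one concrete example of reusing a model trained on a known problem when labeled data is scarce, here is a minimal transfer-learning sketch in Python. It assumes PyTorch and torchvision are installed; the three target classes and the random stand-in batch are purely illustrative.

```python
# Minimal transfer-learning sketch: reuse a network pretrained on a large
# source task (ImageNet) and fine-tune only a new output layer on a small
# labeled target dataset. Illustrative only.
import torch
import torch.nn as nn
from torchvision import models

# Start from weights learned on a known problem (downloads pretrained weights).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so the limited target data only has to
# train the small new head, not the whole network.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer for the new task (here: 3 target classes, assumed).
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a tiny labeled batch (random stand-in data).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
```

Note that this kind of reuse also makes explanations harder: the frozen layers encode patterns from the source domain, so justifying a prediction on the target task requires reasoning about both.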

Both science and art share a common goal: to better understand the world around us. They are closely intertwined in what I do every day. The job of a research scientist is predicated not only on analytical skills but also on the ability to open a dialogue: to be in tune with and listen to a broader audience, to present and communicate insights, and to know how to incorporate that experience into one’s work. An old quote from Steve Jobs still rings true today: “Technology alone is not enough—it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.”