When it comes to Artificial Intelligence, we all marvel at its ability to analyze data quickly, save time, and produce impactful results. We like to play with it, create fun memes, and even ask it for help and advice. Despite our quick embrace, however, a lingering resistance remains. But why? In my opinion, it all boils down to fear: the fear of losing our humanity, the fear of displaced jobs, and the fear that technology lacks empathy.

As an EdTech founder, I am always surprised by the discomfort that new technologies bring. Sure, historically, technology has made, and continues to make, some jobs irrelevant, but it also has the power to create new jobs and make our current tasks more efficient. There’s a balance. Advanced tech is at its best when combined with the human touch, making systems and ideas better.

I’ve been giving a lot of thought to this topic lately and believe that the fear can be broken down into two main categories: 1) Consent and 2) Control. People are afraid of things being taken from them without permission, and rightfully so. Consent is essential: it means agreeing to technology’s use of our image, likeness, data, thoughts, ideas, and creative expressions. These worries have been exacerbated by recent cases like the Anthropic class-action lawsuit. I am in a unique position to see both sides of this issue. Besides being a tech founder, I am also the author of a book that was caught up in this case. As an author, I don’t want my thoughts and creative ideas used without my consent or knowledge. As a founder, I understand why this happened, although I don’t agree with it. Would there have been a different outcome if Anthropic had been more transparent with authors and publishers? I think so.

The second major factor at play here is Control. There’s a lot of fear around our ability to “control” the output of AI and technology tools. We want to know what the LLM will say to our customers, our children, our friends, and those we care about before it actually says it. For some tools, there is limited control, and that is a problem. In my opinion, the answer is thoughtful design: putting strong guardrails in place so that customization and personalization are available, but within defined parameters.
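To make this concrete, here is a minimal sketch of what I mean by personalization within parameters. Everything in it is hypothetical: the guardrail values, the preference names, and the fallback message are illustrations of the shape of the design, not the actual workings of my platform or anyone else’s.

```python
# A hypothetical sketch of "personalization within parameters."
# The designer fixes the guardrails; users adjust only what falls inside them.

from dataclasses import dataclass

@dataclass(frozen=True)
class Guardrails:
    """Set by the platform designer; not user-adjustable."""
    allowed_topics: frozenset = frozenset({"math", "reading", "science"})
    blocked_phrases: tuple = ("medical advice", "legal advice")
    max_reading_level: int = 8

@dataclass
class UserPreferences:
    """Personalization knobs, clamped to the guardrails before use."""
    topic: str = "math"
    reading_level: int = 5

def clamp_preferences(prefs: UserPreferences, rails: Guardrails) -> UserPreferences:
    # Honor personalization only inside the designer-set parameters.
    topic = prefs.topic if prefs.topic in rails.allowed_topics else "math"
    level = min(prefs.reading_level, rails.max_reading_level)
    return UserPreferences(topic=topic, reading_level=level)

def screen_output(reply: str, rails: Guardrails) -> str:
    # Check the model's reply before it ever reaches the user, and fall
    # back to a safe, human-reviewed response if it drifts out of bounds.
    lowered = reply.lower()
    if any(phrase in lowered for phrase in rails.blocked_phrases):
        return "Let's get back to today's lesson."
    return reply
```

The specifics don’t matter; the shape does. The LLM never speaks to a user except through guardrails a human deliberately chose.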

Adding these features is at the forefront of my mind as I continue to develop my AI-infused platform. I want to ensure that the user experience is meaningful, planned, consistent, and deliberate, not something that ebbs and flows on its own, and that the tool is used in the way it is intended. I want my customers and those they serve to have a high-quality experience, and that can be accomplished without sacrificing what makes many AI-infused platforms so amazing and useful: the ability to personalize the journey.

It’s also critical to take a close look at the creator behind the system. Do they have relevant experience in the industry? Have they given significant thought to the various stakeholders who will be using the tool? Are they solving substantial and meaningful problems in their field and ensuring that quality is maintained? Are they gathering extensive input from others in their discipline and running pilots to gather even more feedback?

I don’t have all of the answers on this topic. However, I believe that if responsible developers with industry experience lean into features that maximize control and consent, they will produce valuable products that enhance the user experience, alleviate fear, and truly merge the best of both worlds: AI capabilities and human delivery and design.