C-Suite Perspectives On AI: Kelsey Behringer Of Packback On Where to Use AI and Where to Rely Only on Humans

An Interview With Kieran Powell

Stay curious and prioritize learning!

Every day I learn something new about AI, whether it's a new tool, study, or policy. There's a lot of noise, but within that noise there is incredible insight and opportunity. Make sure you're taking the time to keep up and learn. I subscribe to a few daily newsletters — my favorite is The Neuron — and try to listen to a few AI podcast episodes a week.

As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing Kelsey Behringer.

Kelsey Behringer has trained over 10,000 educators on the intersection of AI and pedagogy, helping them implement best practices that advance student voice, agency, and purpose. She is the CEO of Packback, an education technology organization that provides an AI writing tutor for every student and an AI grading assistant for every teacher. She held numerous roles at the organization, from educator trainer to manager, director, VP, and COO, before becoming CEO.

Packback's AI has been used by over 2,000,000 students and 600 colleges and districts to help every student develop their own unique voice by receiving instant, real-time feedback as they write.

Kelsey approaches her work through the lens of her experiences as a former STEM teacher in Chicago and Indianapolis. She is a Fast Company Executive Board contributor and hosts monthly webinars for teachers and professors where she gives recommendations about AI in education.

Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

I studied Chemistry at Indiana University with the intention of becoming either a doctor or an industrial chemist. However, after becoming my friends' go-to Organic Chemistry tutor, I realized that my passion lay not in a lab, but in a classroom.

After graduation, I went on to teach high school chemistry and physics in Indianapolis, then back home in Chicago. During my time as an educator, technology was my right-hand man. I struggled with consistent attendance from a good portion of my students, so I registered the domain "missbchemistry.com" for $25/year and built a website that included daily notes and curated YouTube videos. I dealt with a lack of engagement from my teenage students, often stemming from instability at home, hunger, or just good old-fashioned angst. So, I used just about any free or low-cost gamification tool I could find in my classroom. After a few years of weaving technology into my teaching practice, I felt a pull to evangelize the use of technology in the classroom professionally. That's why I joined the team at Packback as an account manager and employee number 26.

While I was scaling and growing Packback's Customer Success function, I was also doing some very fun (to me) projects around product and AI training. To this day, every new employee takes a multi-day online product lesson I built back in 2018. To be an excellent trainer on a topic, you need to be an expert, which is how my fascination with AI, and my not-very-technical expertise in it, got started. Furthermore, Packback was the category creator for this type of product. There was no other platform in higher education at that time that provided AI-powered feedback, moderation, and grading features, all integrated into the workflow of a graded assignment. I needed to understand our AI infrastructure and philosophy if I wanted to successfully grow our user footprint.

So, in late 2022, when ChatGPT absolutely blew up, I saw an opportunity to commit to providing free, high-quality supplemental AI training not only to our customer base, but to all curious education faculty. Since January of 2023, I've been facilitating bi-weekly AI webinars and workshops in coordination with some of our wonderful partners, including The League for Innovation in the Community College, as well as publishing blogs and articles containing recommendations, predictions, and case studies about how AI should be used in educational and non-educational settings.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

Where to start? For every huge win and exciting milestone, there's a corresponding slip-up. What I'll share is an example that I think makes for a good philosophy about how we should integrate AI into the workplace.

After some board feedback, I was working on a project to reduce our customer maintenance costs. I spent a summer trying to understand how we could increase our customer-to-account-manager ratio by 75%. Everything in the project was going well, but the one problem I didn't yet have a solution for was how to reduce the amount of time account managers spent answering technical emails and requests from customers.

My solution was to set up email forwarding so that every email sent to an account manager was forwarded to our product support queue. The vision? Our product support team would "triage" account managers' inboxes, claiming the technical or product support emails that our account managers would often receive.

Great idea in theory! Terrible in reality. First, we more or less shoved a square peg through a round hole when setting up our support ticketing instance to make this process work. We hacked together a solution based on our vision instead of rethinking the vision to fit the technical abilities of the tool. Second, we created more problems and headaches than we solved. We slapped a solution together thinking it'd save 20–30% of our account managers' time, when it probably cost them time and created frustration. After a few months, we reverted the change. My mistake was not fully understanding the root of the problem before implementing a solution, as well as forcing a technology to fit my vision of a process.

The issue wasn’t that it was taking too long for the account managers to answer technical emails, it was that they were receiving these emails in the first place. Turns out, what needed to change was our onboarding practices so that customers understood technical issues would only be handled by our technical support team.

My takeaway from this ordeal: before you decide to implement a solution, make sure you do the research to fully understand the root of the problem. Then, take your time understanding the right path to the solution. Be open to changing your mind! Your vision of how to solve the problem may increase, rather than decrease, friction. As wonderful, shiny AI tools for the workplace start popping up everywhere, don't lose sight of this advice!

Are you working on any exciting new projects now? How do you think that will help people?

Our product and engineering team has spent the last ~9 months building a new tool called "Writing Lab," which just recently launched.

Writing Lab is our FIRST direct-to-student platform, and it functions as an individualized version of our longform writing platform, Deep Dives. Writing Lab brings the writing center to students' fingertips by providing them with customized, AI-powered feedback on their writing, allowing them to learn and iterate as they write. In a post-ChatGPT world, this product teaches students HOW to write, never writing for them.

Our goal with Writing Lab is two-fold. First, make the writing experience even more magical by creating customized, genAI-powered feedback loops that would be challenging to build with the rigid, rule-based algorithms developers might use for things like grammatical feedback. Second, introduce generative AI into learning and writing workflows in ways that do not compromise academic integrity, data privacy, ethics, or the learning process. Our use of generative AI helps students develop their critical thinking and writing skills instead of simply writing for them. Along the way, we'll teach students how to properly cite generative AI when they use our chatbot (yes, each student has their own personal chatbot to help them flesh out ideas!), and help students proactively check for unethical use of AI or outside sources before they download and submit their work to their instructors.

I've been able to read a good amount of research about student voices and concerns regarding AI, as well as speak directly with late-high-school and college students. The reality is that many students know their lives are going to be drastically changed by AI but are far too scared of "getting caught" to touch it. When they see ChatGPT being used on TikTok, it's not to enhance their writing skills, but rather to get a quick A on an assignment in 45 seconds. To cheat. Students don't have a great option for an academically honest AI writing tool, and even more pressing, they're not being taught how to interact with AI. My hope is that Writing Lab teaches students how to safely and productively interact with, and benefit from, artificial intelligence.

Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?

Despite being a software company that builds and provides AI, we haven't been too quick to start weaving AI automation into all of our business processes. So, I'll start by stating that my experience incorporating AI into business operations can be classified as "advanced beginner."

That said, the most challenging aspect of AI integration, and a reason I've been rather slow to innovate, is the sheer volume of choice right now.

Consumer chatbots like ChatGPT and Claude are helpful for automating or supporting certain tasks, but they're not out-of-the-box solutions with hyper-specific workflows to address a specific need or outcome. Then, when I search for "AI-powered email marketing automation tools," I'm bombarded with options. The GenAI boom has led to some awesome innovation and the rise of new software solutions, but that also means it's hard to verify the authenticity or effectiveness of a tool that sounds exciting or valuable. Simply put, there's a lot of noise to navigate as I look for AI solutions and tools. I am not yet at a stage in my AI-implementation journey where I've had to grapple with preserving human-centric roles, although I'm sure that's on the horizon!

Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?

Funny you should ask: we experienced this exact conundrum quite recently. We are currently undergoing a big project to optimize for and put more of a focus on SEO, with much of that workload directed to our marketing department. However, as we've come to find out, SEO research and optimization is a pretty time-consuming process. It takes a while to source the right keywords, keyword phrases, and meta tags to create optimal visibility on Google, so we tried to expedite that process by using AI to help source keywords and keyword-driven content.

The problem? AI can't (at least not yet) factor the user experience into creating an SEO-optimized blog, article, or webpage, and Google's algorithm weighs the perceived user experience of a page heavily when deciding where it will rank in search results. The content needs to be relatable, relevant, readable, and consistent. At this stage, AI is just not reliably able to do that. It produces pages that are difficult to read and so forcibly stuffed with keywords that they do more harm than good.

This experience taught us that sometimes there is just no replacing the human element. While AI can act like a human and produce content like a human, I find it falls short when we ask it to think or read like a human.

How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?

I allude to this in a later answer, but I've found it necessary to train my team on ethical AI usage, given that we often interact with sensitive data. One of the biggest ethical concerns I have at this moment is employees mishandling sensitive customer data in LLM-powered chatbots. I've made sure to teach my team about data privacy and LLMs, and I've communicated continually that no customer data should be entered into an LLM. I'm currently in the process of developing and publishing a generative AI policy for my team.

The other consideration I'm navigating now is the use of AI for written performance reviews. I am not against a manager using generative AI to help write a performance review or to get feedback on the wording or tone, but I am against:

1. A manager sharing sensitive or specific information (e.g., an employee name, a customer name, or specific performance metrics) with an LLM, and
2. A manager using an LLM to write a performance review without any personalization, editing, or highly thoughtful prompting. A thoughtfully worded piece of constructive feedback from a manager can materially change an employee's performance and job satisfaction. A flippant, vague, or even inaccurate piece of constructive feedback can sever trust between an employee and a manager, or even between the employee and the organization.

I have no plans, now or in the future, to eliminate positions currently occupied by human employees and replace them with AI automation. Having said that, AI is certainly in my plan to scale, and I am hopeful that it's mutually beneficial. I'd like to use AI to limit or decrease operating costs where I can, and at the same time use it in a way that allows my team to add variety to their roles and learn new skills. If I can automate the low-value responsibilities of a job description and then give that same employee new, more strategic responsibilities, I believe both the business and the employee will be better for it. My job, as someone who cares about employee performance and satisfaction, is to make sure I strike the balance of "good for business, good for employee."

Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?

A few months back, a member of my team shared the tool "scribehow.com". This tool captures your screen while you walk through a process (let's say creating an account on our website and setting up a course with highly specific requirements). It then translates everything you do — scroll, type, click — into an editable document. Process documentation that would usually have taken 30+ minutes now boils down to a few clicks… it's incredible. What's more, it's allowing us to make much more customized materials for individual customers on demand, because we have the capacity and it's so easy.

It’s a highly specific tool and a small win, but an excellent use of AI to make someone more efficient and effective.

Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career? Please share a story or an example for each.

1. Use AI to boil some toil.

"Toil" is a concept discussed heavily in engineering operations. My CTO, Craig, introduced me to the concept of toil and the desire to minimize it (hence, "boil the toil"). Toil represents tasks that are repetitive, manual, devoid of enduring value, and automatable. Simply put, it's often the daily or weekly tasks your employees have to do but don't want to do, that don't provide high strategic value, and that are maybe even "below their pay grade."

Back in the fall, a study from BCG, Harvard Business School, MIT Sloan School of Management, the Wharton School at the University of Pennsylvania, and the University of Warwick found that OpenAI's GPT-4 improved the performance of 90% of consultants when used for certain tasks, like creative ideation.

My recommendation for business operators and teams is to start delegating tasks that can be defined as toil to AI.

2. But choose the right toil to boil.

The study mentioned above also found that performance decreased for some consultants on certain tasks, like problem solving. For the sake of employee performance, not all tasks should be delegated to AI or automation.

Make sure you’ve identified the right tasks — or toil — to augment with AI tooling. There’s nothing worse than investing in a new tool only for it to make your team less effective. You can do this by studying the strengths and weaknesses of AI and making sure any AI automation is only used for tasks that align with AI’s strengths.

In a similar vein, make sure you do proper research on your team's toil. Ask them to track how they spend their time and what tasks they'd want to automate or remove if they could. If you — a leader or operator — simply assume what the toil is, you may end up spending good money automating a task that wasn't taking much time, or worst of all, you may automate something your employees found fun or rewarding.

3. Don't accidentally become a headline 🙂

Back in February of '23, I read that 11% of the data employees pasted into ChatGPT was confidential.

Many occupations handle classified or confidential data. For example, medical professionals must follow HIPAA protocols, which protect patients' private information, and education professionals follow FERPA protocols, which protect student data. Most businesses and occupations handle sensitive customer or user data in one way or another. If your business routinely handles sensitive data, make sure you have a generative AI policy in place for your employees. Make sure your employees read and sign the document just as they would an offer letter or employee handbook.

Your employees — heck, even you as an operator — may not realize that when you use consumer chatbots like ChatGPT or Claude, the words you type into your prompts may be retained and used to train the LLM.

I imagine we’ll see headlines soon enough to the tune of “Lawyer on trial for uploading thousands of classified documents to ChatGPT”. Make sure that isn’t you, and start training your employees now.

4. Delegate, delegate, delegate.

Your head of product or CTO can't be the only person in your organization who is researching AI and implementing it into their workflows. They also can't be the sole keeper of knowledge regarding AI best practices, data privacy, and general tooling infrastructure. That's a recipe for disaster and a sure-fire way to become a headline.

Delegate research and implementation tasks to department heads and leaders. Help them meet the specific needs of their teams and hone their excellence as systems engineers.

Create a committee or task force to help do things like research and recommend new free tools, research and recommend AI policies, and create or curate supplemental training materials for your team.

What could be an overwhelming burden for you could be a stimulating professional growth opportunity for a hungry leader in your organization.

5. Stay curious and prioritize learning!

Every day I learn something new about AI, whether it's a new tool, study, or policy. There's a lot of noise, but within that noise there is incredible insight and opportunity. Make sure you're taking the time to keep up and learn. I subscribe to a few daily newsletters — my favorite is The Neuron — and try to listen to a few AI podcast episodes a week.

Find industry-specific resources or influencers you can follow to keep up with the most relevant content. As someone in the education sector, I follow Ethan Mollick — an associate professor at The Wharton School of the University of Pennsylvania — on every medium possible.

And while you’re at it, make sure your employees and teams are keeping up with their curiosity. That might mean investing in a professional development budget for your team or dedicating paid time off for folks to take classes, read literature, and learn important skills and concepts around AI.

Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?

I am quite excited to use AI for marketing automation, including content personalization to increase open and engagement rates in email copy. This is a project I am working on right now with a small task force, and it has been quite fun. I'm looking forward to researching and implementing AI automation in areas of my business that have general under-investment but high upside, or for tasks that my team has identified as "maximum toil." Right now, our Director of Product is supporting a company-wide research project on toil. Every employee is tracking their daily tasks for two weeks, and will then share a list of the top 5 things they do in a day, broken down by how much time is usually spent on each task. We'll let each employee flag the one or two things they'd personally like to leave behind, and we'll use that as our roadmap to boil at least 30% of our toil in 2024.
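To make that kind of toil audit concrete, here is a minimal sketch of how two weeks of task logs might be tallied and ranked. Everything in it — the file name, the CSV columns, and the helper function — is a hypothetical illustration of the process described above, not Packback's actual tooling.

```python
# Hypothetical toil audit: aggregate two weeks of task logs and rank
# automation candidates. The CSV schema (employee, task, minutes_per_day,
# wants_to_automate) and the 30% goal are illustrative assumptions.
import csv
from collections import defaultdict

def rank_toil(log_path: str, target_share: float = 0.30) -> None:
    minutes_by_task = defaultdict(float)  # total minutes logged per task
    flagged = set()                       # tasks employees want to hand off

    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            minutes_by_task[row["task"]] += float(row["minutes_per_day"])
            if row["wants_to_automate"].strip().lower() == "yes":
                flagged.add(row["task"])

    total = sum(minutes_by_task.values())

    # Show each employee-reported "top 5": the biggest time sinks overall.
    print(f"{'task':<30} {'minutes':>8} {'share':>7}  flagged")
    for task, minutes in sorted(minutes_by_task.items(), key=lambda kv: -kv[1])[:5]:
        mark = "yes" if task in flagged else "no"
        print(f"{task:<30} {minutes:>8.0f} {minutes / total:>7.1%}  {mark}")

    # Compare the toil employees actually flagged against the target.
    flagged_share = sum(minutes_by_task[t] for t in flagged) / total
    print(f"\nFlagged toil is {flagged_share:.1%} of logged time "
          f"(goal: automate at least {target_share:.0%}).")

rank_toil("toil_log.csv")
```

Ranking by total logged minutes and cross-referencing what employees actually want to hand off is what keeps you from automating a task that wasn't costing much time in the first place.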

Areas where I still expect my human employees to avoid or limit AI use include strategic decision-making, performance management, and culture management. I would be okay with implementing AI software that identifies current customers showing early signs of churn risk, but I would not be okay with AI deciding what action to take and then taking that action.

I’m also not keen on using AI to entirely generate important and high-value internal communication, like the weekly memo I send to my company. I want that memo to have my tone and voice, be highly specific, and ultimately inform perceptions and attitudes. That is a piece of content I probably won’t ever fully delegate to AI automation.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

I see a lot of gaps at the intersection of AI and education right now, and the gap I'd like to directly address with my influence is AI curriculum development and training for all students. Our veteran educators are not necessarily going to be impacted by the AI job-displacement revolution that is coming in 5–20 years. Their students, however, are going to be facing that reality.

Our veteran educators may have no desire or appetite to go out of their way to introduce their students to AI, teach their students about AI, or even simply discuss concerns around AI advancements and automations. But an educator’s preference can’t keep students from facing and processing what’s coming. I would love to be involved in creating and implementing a broad AI curriculum across K-12 and higher education, and would love to be at the forefront of educator AI professional development. I think the key to success here is standardization and cost. Every student needs to be able to access this curriculum regardless of district or college, and the quality of the curriculum can’t vary from one state or institution to another.

So, I suppose the movement is “free, high quality AI training for every student and educator”.

How can our readers further follow your work online?

Readers can find me all over the place. I’ve hosted dozens of webinars focused on the intersection of AI and education which can all be found on our Packback YouTube channel.

If the written word is more your vibe, you can find me regularly featured on Packback's blog, in the occasional article I write for Fast Company, or sharing my own thoughts on AI and education via my LinkedIn page.

As I like to jokingly tell people: Google me, you'll find plenty!

This was very inspiring. Thank you so much for joining us!

About The Interviewer: Kieran Powell is the EVP of Channel V Media, a New York City public relations agency with a global network of agency partners in over 30 countries. Kieran has advised more than 150 companies in the technology, B2B, retail, and financial sectors. Prior to taking over business operations at Channel V Media, Kieran held roles at Merrill Lynch, PwC, and Ernst & Young. Get in touch with Kieran to discuss how marketing and public relations can be leveraged to achieve concrete business goals.

