C-Suite Perspectives On AI: Dan Javorsek Of EpiSci On Where to Use AI and Where to Rely Only on Humans

An Interview With Kieran Powell

Legal/Moral/Ethical Implications: Anytime legal, moral, or ethical implications are at play, human oversight should be triggered. While the contextual situation will ultimately dictate the extent of human involvement, these all require some reliance on human judgment. Broader DoD guidance is that humans will always be on the loop in some capacity when these concerns are raised.

As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing Dan Javorsek.

Dr. Dan Javorsek is the Chief Technology Officer (CTO) of EpiSys Science, Inc. (EpiSci), a software startup that develops next-generation tactical autonomy solutions for national security problems.

Prior to joining EpiSci, Dan was a Colonel and the Commander of the Air Force Operational Test and Evaluation Center, Detachment 6.

Throughout his Air Force career, he commanded thousands of personnel executing Developmental and Operational Test and Evaluation for over $9.6B worth of A-10, F-15, F-16, F-22, F-35, F-117, and Next Generation Air Dominance (NGAD) Family of Systems (FoS) aircraft, including the Penetrating Counter Air (PCA) and Collaborative Combat Aircraft (CCA) acquisition programs.

Dr. Javorsek is an accomplished test pilot with over 2000 hours of experience in demonstrator, prototype, and operational aircraft while simultaneously remaining active in nuclear astrophysics research.

Between operational and flying test assignments he performed duties as a Program Manager in DARPA’s Strategic Technology Office (STO) and as a researcher in the Intelligence Community with projects at the Defense Intelligence Agency (DIA) and the National Counterproliferation Center (NCPC).

Throughout his career, he has diligently pursued the democratization of software that prevents vendor lock-in and exposes the latent capability to adapt that resides within the existing warfighting system.

Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

Absolutely and thank you for the opportunity to share a bit about what we are doing at EpiSci. I joined EpiSci after a career in the Air Force as an Experimental Test Pilot, Program Manager, and Commander. After a quarter of a century watching DoD acquisitions stuck in a defunct model focused on the antiquated idea that new capabilities only come from new hardware, I saw an opportunity to better help the warfighter out of uniform than in it.

The DoD is currently poised at an inflection point and is on the precipice of embracing the supremacy of software. Software has powered the digital revolution that is shaping our domestic lives in the 21st Century. Your cellphone is more capable today than it was when you bought it…not because it has received some new hardware but because of the software. While this is obvious to anyone with a cellphone, the DoD and the traditional defense industrial base surrounding it have been betting on an old hardware model. And who can blame them? Since antiquity, to get a new capability, militaries had to physically build something new. From the English Longbow at the Battle of Agincourt in 1415 to the airplane, the story was always the same. A new capability required new hardware.

The modern era, however, is characterized by something different. While hardware may meet a requirement, software has the potential to actually solve the problems warfighters care about. As one of a select few truly software-only companies, I think EpiSci is best situated to capitalize on the reset underway in the DoD as it democratizes software on tactical platforms to account for the technical debt associated with our lingering 20th Century fixation on hardware.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

When I was a young test pilot, I was testing out a new missile cueing system that basically consisted of a mini projection screen located inside my helmet. Before that system, pilot helmets were just fancy motorcycle helmets without any embedded electronics, so this was adding a lot of new capabilities.

As is typical of flight tests with new equipment, my flight was accompanied by a control room of about 25 engineers who were supporting and monitoring the test with telemetry of the video from my on-board screens and displays, including the view from the new helmet. As luck would have it, it was a long mission, and my several cups of tea that morning became overwhelming, so I told the control room that I would be “Racehorse” for the next 5 minutes (Racehorse was the brevity word we used on the radio to indicate we were using the bathroom). Now, in a single-seat fighter jet this is not an easy task, since there isn’t a lavatory like you might have on a commercial airliner. The task is also complicated by several layers of bulky equipment that make the exercise a bit more like a contortionist routine than a simple trip to the restroom.

It is worth highlighting that the new modification was recording the view from my helmet, so I definitely didn’t want that projected to the control room. To save the control room from an unpleasant view of the Racehorse ritual I selected the OFF position on the main telemetry switch.

Unfortunately, I had not studied the aircraft helmet modification plan closely enough to identify that the helmet camera feed to the control room was separate from the rest of the telemetry, meaning that the main telemetry switch I had thrown had not actually terminated the helmet feed. You guessed it. While I relieved myself, happily thinking I was doing so in private (since the telemetry was switched off), I gave the control room quite a show.

In debrief, after discussing the successes of the mission, we had a bit of a chuckle when they finally let me know of the embarrassing moment that had occurred. From then on, I discovered test teams really do get to know each other more intimately than I had ever anticipated, and I always made sure to read the aircraft modification manual exceptionally carefully…especially the section on the recorder.

Are you working on any exciting new projects now? How do you think that will help people?

Absolutely! The beauty of being a software-only company is that we must look for opportunities to reuse as much of our code as we can. We are finding that the math is truly agnostic to its application, such that collaborative autonomous behaviors actually apply to much more than just embodied platform maneuvers. It extends down into the resource management of sensors and similar subsystems, as well as up to battle management command and control layers where campaign-level decisions are made.

Actualizing these concepts with so many different applications is exciting to me because I am confident that the autonomy we are developing and deploying will save lives in the future. Younger, better-looking versions of me will return home safely to their families because our autonomy software will elevate their performance and allow them to get inside the decision cycle of their adversaries.

Far too much of my generation’s time was spent teaching fingers as we had to learn how to finely control all aspects of the aircraft…our autonomy allows future efforts to focus on teaching brains how to outthink their opponent, which is a much better use of the human on the loop.

Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?

The most challenging aspects of integrating AI into any operation arise from a rather paradoxical artifact of nearly any emerging disruptive technology — the very humans who will benefit from the technology generally resist it. It is natural to ask why, and I believe it is because they feel threatened.

From the ballad of John Henry, the steel-driving man who took on the steam-powered drill at the Big Bend Tunnel in 1870, to the US Army’s horse-mounted cavalry of World War II threatened by the internal combustion engine of the German Blitzkrieg, the story has always been the same. A new technology is here to disrupt your way of life and you should be afraid.

This concept goes far beyond military applications and, for AI, permeates nearly every industry, as the fear extends from the blue-collar workers of the Industrial Revolution to the white-collar workers of today. In each case, the new technology often threatens the heritage, honor, values — even the dignity — of those sacrificing daily for the mission.

For example, if you talk to any Naval Aviator, I can guarantee they will be quick to highlight the number of “cats” and “traps” they’ve made on an aircraft carrier. It’s important to them; it defines them. So, ask yourself, what happens when we give them autonomous solutions that will do that carrier landing 100 percent of the time, better than they do, no matter the circumstances? In a way, we are telling them that the thing they value in their contribution is being automated away…that we value new traits and new skills that might be very different from before. When viewed through this lens, it is obvious why they resist autonomy, and this will be much worse with AI able to perform tasks we used to think were uniquely human.

At EpiSci, we have chosen to focus our work on trust. We deliberately introduce our technology slowly and with tight feedback from the aircrew. I often like to say that we “co-evolve the tactics with the technology” to minimize friction at the interface and maintain custody of the trust we build. This approach is the best way to arrive at a human-machine team that is properly calibrated to understand the situations where the autonomy works well, and which situations will require us to preserve our human-centric roles.

Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?

It is hard to appreciate how much context is involved in nearly every important military decision. As one might imagine, when the consequences and uncertainty are high, so too are the chances that human intervention will be required.

In combat, every lethal decision is loaded with legal, moral, and ethical implications. While the Rules of Engagement, Special Instructions, Commander’s Intent, and other formal guidance may seem clear, in practice the actual situation on the ground or in the air rarely, if ever, matches the conditions outlined in that guidance. In nearly every case, a level of human judgment must be applied, and this is why it is so important that our warfighters learn with our systems.

We need operators calibrated to the situations in which the automation can be trusted and, more importantly, the situations in which it risks failure. In the past, many of the roles and decisions that fell to humans did so because our technology was insufficient or incapable of handling them. As AI algorithms like ChatGPT and others popularized in the media get better at things we thought were uniquely human, we are forced to ask ourselves important questions.

In the end, we are no longer in the position of simply asking what autonomy/AI can do, we now must ask what it SHOULD do.

How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?

In the field of AI, we often talk about “machine learning,” but I think that term is a bit misleading. What we really have are “machine-trained” systems, and I use that term because, as a large number of books and articles on this topic have captured, these systems rely heavily on data to figure out how to behave or respond in a given situation.

The hang-up with this model is that those behaviors are only as good as the data provided. Many companies have already discovered that if the data retain features or biases of how they were collected, then so too will the behaviors. This concept extends beyond the data and can also apply to the algorithms themselves. In the end, awareness of these effects can be a very helpful tool in combating these unintended consequences, and it turns out that this is a very active field of research in the AI community.

Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?

We are fortunate that aircraft automation and battle management command and control challenge problems are filled with examples of AI and human skills synergistically combining.

While DoD Directive 3000.09 provides a nice first hack at the importance of humans on the loop in decisions involving lethal outcomes and those with legal, moral, and ethical considerations, the military is filled with systems that have humans regularly performing very menial and repetitive tasks. These are the low-hanging fruit that yield significant improvements. Narrow tasks with clear, well-defined success or victory conditions are where we see human-machine teams work best.

The caveat is that the success of these implementations is highly dependent on the interfaces and how much operator involvement occurred along the way. While our machine teammates excel at data acquisition and processing, our humanity is often our superpower, allowing us to get inside our adversary’s mind while simultaneously capturing the context, intent, and sentiment associated with the situation. When combined such that we play to each of these respective strengths, we see significant increases in the team’s overall performance.

My wife and I have two daughters and four horses, so we are very familiar with building trust in non-human minds. In fact, when discussing the human-AI teaming relationship, I think we can learn a lot from traditional cattle work. I often use cutting and team penning as good examples because they epitomize the relationship we should be striving for. Cutting a cow from its herd is no small feat. Over millions of years, cattle have learned that there is safety in numbers, and through some rather simple neighbor signaling (similar to that of schooling fish or flocking birds), they can produce remarkable response times that are critical to their survival. If you tried to cut a cow using a motorcycle, you would soon find that it is nearly impossible because you cannot get the bike to respond quickly enough. Basically, your reaction times coupled with the delays from your equipment are insufficient to get inside those of the cow.

However, if I put a good rider on a solid cutting horse, it is almost easy. This is because when this Human-Commanded, Horse-Controlled system reaches the right synch (which we call harmony in equestrian circles), it works wonderfully. The human communicates to the horse which cow needs to be cut and then gives the horse the reins, meaning that we transfer control to them. As another herd animal with similarly refined reaction times honed by millions of years of survival, the horse makes cutting possible.

In this team, we partner our contextual understanding of the broader objective with the fast reaction time of the horse to accomplish what neither member could on their own. This is the reason I often state that when we develop our autonomy we are striving for “Human-AI Harmony” and I frequently talk about a future that is “Human-Commanded and AI-controlled.”

Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career?

  1. Legal/Moral/Ethical Implications: Anytime legal, moral, or ethical implications are at play, human oversight should be triggered. While the contextual situation will ultimately dictate the extent of human involvement, these all require some reliance on human judgment. Broader DoD guidance is that humans will always be on the loop in some capacity when these concerns are raised.
  2. High Consequences: When the consequences are high, as is often the case in combat situations, human oversight should also be immediately triggered. Clearly defined rules of engagement and other doctrinal guidance will help assess how much autonomy might be warranted. However, it is important to note that this is a bilateral assumption. Nothing adjusts someone’s risk calculus and their willingness to accept less surgical employment more than when they themselves are threatened. Nearly all consequence thresholds and definitions are context dependent and rely on the nature of the environment in the moment. The more these thresholds can be determined a priori, the more successfully they can be applied in real time.
  3. Co-evolve the Tactics with the Technology: By keeping operators involved throughout the development process we help build trust in the system. This is enhanced by periodic resets and even regression to earlier versions. Although expert systems and finite state machines are frequently considered unfashionable in the modern AI/ML community, they serve an important purpose in gaining and maintaining custody of operator trust.
  4. Properly Calibrated Trust instead of Blind Faith: We have observed a generational asymmetry in the willingness to incorporate and embrace AI/Autonomy in a wide range of applications. While more mature generations tend to be overly skeptical and distrustful of the new technology, the opposite can be true of digital natives who are more comfortable with the tech. Given the hype and the anthropomorphized language used so frequently in this field, it is easy for uneducated leaders to view AI as a universal solution when it is not. By following #3 above, we set ourselves up to minimize these effects.
  5. Human-Commanded, AI-Controlled: In the end, we must place a high emphasis on minimizing friction at the interface between the human and their machine teammates. The more we can emulate the kinds of partnerships often found in other inter-species collaborative working relationships (such as those with horses, dogs, or raptors) the more primed we will be to look for ways in which our unique talents complement each other.

Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?

AI will continue to grow in prevalence for narrowly focused and repetitive tasks. This will be especially true when large, well-curated datasets exist. However, many fields will require a human touch. The empathy that humans can show for each other will remain a very powerful tool and plays a bigger role than is often appreciated. I think AI will help us automate a number of routine processes, which will add consistency and reduce the errors often associated with manual human entry.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

I’m not sure about the great influence part, but while many people are worried about how generative AI will negatively influence Western democracy, I wonder if it may simultaneously be our last line of defense. As public trust erodes and it becomes increasingly difficult to agree on what used to be universally considered objectively true, the virtues of these models may help us sift through the cacophony to arrive at what is actually important to us. As this technology begins to permeate nearly every aspect of our lives, I suspect it could aid us in deciding what to ignore while exposing how susceptible humans are to this specific form of manipulation.

How can our readers further follow your work online?

You can follow us on LinkedIn (https://www.linkedin.com/company/episci) or via our website at https://www.episci.com

This was very inspiring. Thank you so much for joining us!

About The Interviewer: Kieran is the EVP of Channel V Media, a Public Relations agency based in New York City with a global network of agency partners in over 30 countries. Kieran has advised over 150 companies in the technology, B2B, retail and financial sectors. Previously Kieran worked at Merrill Lynch, PwC, and Ernst & Young. Get in touch with Kieran to discuss how marketing and public relations can help achieve your company’s business goals.


C-Suite Perspectives On AI: Dan Javorsek Of EpiSci On Where to Use AI and Where to Rely Only on… was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.