

As part of our series about the future of Artificial Intelligence, I had the pleasure of interviewing Handol Kim.

Handol is Co-Founder & CEO of Variational AI, a Vancouver-based AI for drug discovery start-up developing novel therapeutics in oncology and beyond. He is a three-time tech entrepreneur with 20+ years in executive leadership roles in AI/ML, quantum computing, software, cloud, and semiconductor in start-ups and public companies in Silicon Valley, Canada, and Asia-Pacific.

Thank you so much for joining us in this interview series! Can you share with us the ‘backstory’ of how you decided to pursue this career path in AI?

My journey in AI started at a quantum computing company called D-Wave Systems. I was brought in to run the business side of their non-quantum AI software and services business unit, after having worked as their VP of Business Development many years before.

It was 2017 and all the smart people I knew from software were getting into AI/machine learning. But it was a presentation by Alibaba Cloud execs, when I was selling them cloud software, that really made me take notice: they showed that AI was their fastest-growing customer segment. I thought “no way” and even challenged them in the meeting, but they showed me real compute usage data and I was convinced. I thought “It’s happening now”, and I knew I had to get into AI. I was always a geeky kid, and the fact that something from the realm of sci-fi was actually happening in the real world — and at scale — blew my mind.

In 2019, some of the members of my team and I started Variational AI, and it’s been an incredible ride since then.

What lessons can others learn from your story?

Don’t burn your bridges, stay curious, and keep your head on a swivel. I’d left the field of quantum computing back in 2005 when I realized it wasn’t going to be commercializable for some time and there was no B to D (business to develop) while the science was getting figured out. But luckily I left on good terms and was able to come back into the same organization — but in a different role — 12 years later after I saw that Alibaba presentation.

I hate to rehash that old William Gibson quote, “the future is already here — it’s just not evenly distributed,” but in the case of AI and pretty much all of deep tech, it’s spot on. It takes a whole lot of optimism, faith in science and tech, and a really, really thick skin to turn something that sounds like sci-fi into reality and then scale it. I’ve worked in some pretty profitable areas of tech that solved very hard technical problems, but the primary risk was always market-based: we’re pretty sure we can build it, but will anyone buy it instead of buying the competition’s product? In AI and deep tech, it’s all science and tech-based risk. The problem is so hard that if we can solve it, then the market risk is secondary. This is the fun stuff, and it gets me going every morning when I wake up.

Can you tell our readers about the most interesting projects you are working on now?

We’re building a drug discovery platform based on a generative model — specifically a Variational Autoencoder (VAE) — ergo our company name! We use this platform (which we call Enki) to rapidly discover, from the vastness of chemical space, novel small molecule therapeutics predicted to be efficacious, safe, and synthesizable. Chemical space, a concept in cheminformatics denoting the set of all possible molecules that “could” be drugs, comprises an estimated 10^60 molecules (out of a total of around 10^80 possible molecules — estimates vary). This is an absolutely enormous space, a billion times a billion times a billion times a billion greater than the number of stars in the universe. It’s impossible to search via brute force.
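As a quick sanity check on those orders of magnitude (the star count used here, roughly 10^24 for the observable universe, is a commonly cited estimate and an assumption not stated in the interview):

```python
# Order-of-magnitude check on the chemical-space comparison above:
# ~10^60 drug-like molecules vs. ~10^24 stars in the observable
# universe (both figures are rough, commonly cited estimates).
chemical_space = 10 ** 60
stars = 10 ** 24

ratio = chemical_space // stars   # 10^36
billion = 10 ** 9

# "a billion times a billion times a billion times a billion"
print(ratio == billion ** 4)      # True: 10^36 on both sides
```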

Moreover, it’s a space that we access via molecular structures. We can lay out all of the molecules in chemical space based on their structures, moving in small increments and decrements. Imagine you had a phone book with 10^60 entries ordered alphabetically by name, but what you wanted was to find an address and the name of the person living there. What you would want is a map.

The problem is, looking only at molecular structures is a very sub-optimal way of discovering drugs, as the actual properties of a molecule (e.g., potency, selectivity, synthesizability, pharmacokinetics, toxicity, etc.) cannot usually be predicted from structures alone. We have to actually make these compounds and then test them in a wet lab to validate that they can really become drugs. This is a very slow and expensive process, and one of the reasons why drug discovery and development takes so long and costs so much. In fact, taking a drug from discovery to approval costs around $2.6B (on a capitalized basis), and this cost has doubled every nine years since 1949. It’s an inverse Moore’s Law that the industry calls Eroom’s Law (literally “Moore” spelled backwards). What we’re doing is using generative AI to learn from molecular structures and project them into a lower-dimensional space, called a latent space, to directly optimize multiple drug properties and then project back into chemical space, finding novel molecules from its unexplored areas.
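The encode-optimize-decode loop described above can be sketched numerically. This is a toy illustration, not Variational AI’s Enki platform: the linear “encoder”, “decoder”, and single property surrogate are hypothetical stand-ins for trained neural networks, and a real system would optimize several properties at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained VAE and property predictor (hypothetical
# weights; a real system would use neural networks trained on molecules).
D, Z = 32, 8                                   # feature dim, latent dim
W_enc = rng.normal(size=(Z, D)) / np.sqrt(D)   # "encoder"
W_dec = np.linalg.pinv(W_enc)                  # "decoder" (approx. inverse)
w_prop = rng.normal(size=Z)                    # linear property surrogate

def encode(x):
    """Project structure features into the latent space."""
    return W_enc @ x

def decode(z):
    """Project a latent point back toward structure features."""
    return W_dec @ z

def property_score(z):
    """Differentiable surrogate for a drug property, defined on latents."""
    return float(w_prop @ z)

def optimize_in_latent_space(x, steps=100, lr=0.05):
    """Gradient ascent on the property score in latent space, then decode."""
    z = encode(x)
    for _ in range(steps):
        z = z + lr * w_prop          # gradient of (w_prop @ z) w.r.t. z
    return decode(z), property_score(z)

x_seed = rng.normal(size=D)          # features of a "seed" structure
x_new, score = optimize_in_latent_space(x_seed)
print(score > property_score(encode(x_seed)))   # True: the score improved
```

In practice the objective is multi-property (potency, selectivity, synthesizability, and so on), and as noted above, whatever the decoder proposes must still be synthesized and validated in a wet lab.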

We’re trying to change the unit economics of drug discovery and development by using AI to reduce the discovery stage from years to months and increase the probability of success in development. If we can do this, it opens up the possibility of materially improving patient outcomes for millions. We’ve already validated our approach against cancer, central nervous system (CNS), and most recently COVID-19 targets. For COVID-19, we generated novel and potent molecules targeting the Main Protease of the SARS-CoV-2 virus that are being validated in a wet lab. It’s early yet, but the initial results are very compelling.

But our main focus currently is solid tumor indications in oncology, where we’re going after multi-targeted kinases. We’re building a unified kinase model that will be able to generate novel, highly selective, and computationally optimized kinase inhibitors in weeks. Our AI will be able to use the structural similarity of kinases to make new and better optimized molecules. It’s really exciting!

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?

That person would be my father. We have a wonderful relationship now that we’re older and wiser, but it wasn’t always so wonderful. My dad was an entrepreneur and a maverick, and I grew up in a household that feasted or starved based on the success of his business. It was a typical immigrant experience. This inured me to financial instability from an early age, which I hated as a kid, but it later helped me in the startup world. My father then built a successful business and was riding high when I was in my late-teens and early-twenties, but then lost everything. This taught me to never take success for granted and appreciate how fast fortunes can change.

He then re-started from square one in his late sixties and powered through adversity to rebuild his business. It was around this time that I was running my first start-up. I remember one night when we lost a major deal that was make-or-break for the company, and I was on the razor’s edge of shutting down our company when I called my dad to ask for his advice. I had studiously avoided asking for any advice from him in a misguided attempt to prove that I could make it myself. So, calling him was a big deal for me. After explaining our situation, I asked what he thought I should do, given that he’d been in my situation before. I remember him telling me “Well, maybe you’re not cut out to be an entrepreneur. Maybe you should give up and get a job.”

Like a gangster not wanting his kid to go into the family business, but rather to find something respectable and stable to spare the heartache and pain, he meant it from a place of love and care. I realize this now in hindsight, but when he said it to me, I was dumbstruck at how callous it was, and it worked like a defibrillator on a flat-lining patient. I thanked him and hung up and we found a way to survive and grow. Ultimately though, the business failed — but not that night — and the experience helped me build the armor to survive many more similar situations. My old man is still schooling me, for which I’m infinitely grateful.

What are the 5 things that most excite you about the AI industry? Why?

  1. There are more people entering AI. Before 2011, starting a company and explicitly saying you use AI would make it hard to get funded or to get customers. Before then, during the second AI winter, you had to call it expert systems, cognitive computing, or some other non-offensive proxy name so you didn’t get laughed out of the room. So many smart grad students were dissuaded from studying AI because it was seen as an academic dead-end/research kiss of death. It’s very different now in 2022 as the world has seen the positive impact of machine learning across industries and AI has become hot. In some industries, we’re in an AI summer of love, while in others the green shoots of an AI spring are beginning to emerge.
  2. Growing openness to AI as a potentially valid drug discovery modality within the biopharma industry. When we started Variational AI almost three years ago, we were met with many more eyerolls than nods of agreement when we met with biotech and pharma folks. AI was widely seen as just another fad that promised to revolutionize drug discovery/development but was doomed to be crushed under the weight of its own hype, becoming just another tool in the drug hunter’s toolkit. But over the last six months or so, we’ve been seeing far more openness to AI, and it almost seems that if you’re launching a new biotech start-up now you need to have a pronounced AI/ML or data science angle. It reminds me of where software/tech was a few years ago, where if you didn’t have an AI angle it was hard to get funded.
  3. Increasingly powerful compute. The last NVIDIA architecture jump from Turing to Ampere really accelerated our training times. We’re also seeing some new AI/deep learning optimized processors that hold great promise. Von Neumann architectures might be going extinct (for AI at least), but thank God Moore’s Law still holds!
  4. Recognition of the value of good (training) data. The proven impact of AI across industries has driven the valorization of quality data. Data is increasingly seen not simply as a by-product of business processes to be locked away until an organization has figured out how to use it, but as a strategic asset that, when unlocked, could deliver multiples of value.
  5. Recognition of the strategic importance of AI. There is a recognition that research and implementation of advanced AI is of strategic importance and a hyperplane that separates the winners from the losers. This recognition is almost dogma in digital industries, where we’re seeing the operationalization of AI at scale, but we’re also seeing the growing realization in traditional non-digital industries where the product is a physical good (atoms not bits).

This is the really, really exciting thing about AI and where the next trillions of dollars of value are to be unlocked. What we’re really looking forward to is when analog industries begin to understand that the generation of data is not some digital by-product of their core business and begin to take more care and effort to make their data usable and valuable through AI. Right now, most of the data locked up in the analog industries is of low quality with far too much noise, but that is changing.

What are the 5 things that concern you about the AI industry? Why?

  1. There are more people entering AI. While it is encouraging to see more people work in AI, the quantity-over-quality calculation is potentially getting unbalanced. Not everyone can be an ML researcher, develop a brand-new algorithm or method, and deliver a conference paper at NeurIPS — but a lot of people with far less expertise act like they have. Most decision-makers in business or government lack the ability to determine what qualifications an actual ML expert needs to have, often conflating expertise in adjacent technical fields with AI bona fides. I have lost count of the number of times when a highly technical person in a completely different field is proffered to us as the in-house “AI expert”, often much to that individual’s chagrin or protest. More disturbing still is the wholesale lumping-in of data scientists as AI/ML experts. The disciplines are distinct.
  2. Growing openness to AI as a potentially valid drug discovery modality within the biopharma industry. This is related to my first concern. There are a large number of AI for drug discovery companies all jostling for competitive advantage, but the vast majority of them don’t actually have true AI/ML researchers/scientists. Specifically, their AI efforts are led by computational chemists or bioinformaticians who use machine learning as a technique. They are domain experts who have learned AI post hoc. There is nothing wrong with this, and certainly machine learning is a valid technique to improve what a computational chemist or a biophysicist does, but it isn’t machine learning as a science.

Machine learning in the biopharma industry is not viewed as a science. Most — but not all — who profess to practice it usually don’t need to do much more than say they do AI, and maybe publish a paper in a chemistry or biology journal using some form of machine learning, and that is often good enough to get their AI badge. This puzzles me, as biopharma is a highly specialized and scientifically-based industry where medicinal chemists pretty much all have PhDs in chemistry, and biologists all have PhDs in biology. They’ll defer to one another in their respective domains of expertise. But when it comes to AI, this respect does not exist in the same measure. Of course, we AI for drug discovery folk haven’t done ourselves any favors by making arrogant assertions that we can do what these other scientists cannot — and ten times faster and better. And AI for drug discovery is still new and we’ve not yet seen an approved novel AI-discovered drug, so it’s reasonable for the jury to still be out. Drug discovery and development is one of the hardest things in the world to do (and I worked in quantum computing!) and failure can come at you from so many angles. I have nothing but respect for the industry and those who work in it.

That said, the downside of this increased openness is that most of the growing number of people peddling AI for drug discovery are not actual machine learning researchers. The re-purposed computational chemists and bioinformaticians will likely fail, and that will tar all of us with the same brush.

  3. Increasingly powerful compute. More powerful compute means more power used. While I am amazed every time a new foundation model achieves dizzying performance, I cannot help but wince when I think of how much faster we’ve accelerated our own extinction. We’re guzzling electricity as we throw more and more computing resources at a problem. This is not sustainable.

  4. Recognition of the value of good (training) data. There is a strong bias toward measuring the value of data by volume rather than by quality or by how balanced the dataset is. We see this in the biotech industry, where the assumption is that because there is so much -omics data available, it must be a natural fit for AI. This is not necessarily the case: more is not always better. Huge but unbalanced datasets are not useful, but that is usually what is available.

  5. Recognition of the strategic importance of AI. The importance of AI is being elevated to a national level where capability is seen as a nationally-strategic priority. While that means more government funding for research, it also means that some nation-states are closing themselves off from the free flow of research and circling their wagons. This doesn’t do anyone any good. With the increasing stratification of nation-states along ideological/geopolitical lines, I don’t see this getting any better in the near term, and that is very concerning.

As you know, there is an ongoing debate between prominent scientists (personified as a debate between Elon Musk and Mark Zuckerberg) about whether advanced AI has the future potential to pose a danger to humanity. What is your position about this?

I have no strong position on this, other than to point out that neither Musk nor Zuckerberg are scientists, prominent though they both are. One could point out that truly transformative technologies all pose a danger to humanity, while also providing massive benefit: technologies are agnostic to how they are used by humans. I do believe that oversight is required — as it exists for the internet, nuclear technologies, etc. The part that might concern me is AGI or strong AI. AI as an industry descriptor — or, more accurately, machine learning (and more specifically deep learning) — is less concerning to me as a threat to humanity.

What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?

Nothing can be done about preventing such concerns from materializing: they’re already here. In an age when a substantial portion of the public can be convinced that Bill Gates has put microchips in vaccines, or that influencers on Instagram actually look like they do in real life, or that Ukraine wanted Russia to liberate it, or that Tiananmen never happened…well, hearing about a potentially sentient AI having existential conversations with a Google engineer is more than enough to ensure that these concerns are inflamed.

The challenge in our industry is to demonstrate that AI is not a monolith, that it can positively impact the greater good, and that it can deliver more benefit than harm. This is why at Variational AI we’re so mission-focused on improving human health. If we’re able to use AI to discover new drugs that address unmet medical needs and positively improve patient outcomes, then I hope that will go a long way to showing the benefit of AI.

How have you used your success to bring goodness to the world? Can you share a story?

I’m on the board of directors of the United Way of British Columbia (Lower Mainland) and am committed to helping further our mission of “Local Love in the Community”. I like to joke that I’m the token “tech guy” but am so grateful that I can bring my perspective, network, and knowledge to contribute to our mission. I’ve been lucky to be supported by the awesome UWBC team, and have participated in a data hackathon that used organizational data and machine learning to help the UWBC potentially develop a better way to locate food hubs to ensure optimal coverage of areas of need. Food security is not something one would expect to worry about in the developed world, but with COVID-19, natural disasters such as floods and wildfires wrought by climate change, massive inflation, supply chain issues, and so on, it isn’t business as usual. We have a responsibility to those in our community and must ensure that building a better society doesn’t ignore our neighbors.

As you know, there are not that many women in your industry. Can you advise what is needed to engage more women into the AI industry?

I don’t agree that it’s a supply issue of not enough women in the industry. Through my involvement in the AI Network of British Columbia (AInBC), an AI/ML industry association here in British Columbia, I was fortunate to be a part of the Athena Pathways Project, funded by the Canadian government. Athena Pathways was a program with a stated goal of providing 500 women scholarships, internships, work placements, and mentoring in AI/machine learning to address gender imbalance in AI/ML. The project was a success and one of the things I’m most proud of; it was so successful it was expanded the following year.

What I was struck by in participating in Athena was the large number of qualified women with advanced degrees who were having difficulty finding opportunities for work. It was straightforward for me to connect participants with jobs — and two of the women I worked with found jobs in machine learning based on introductions I made. That was very rewarding. I don’t necessarily buy that it’s a problem of not enough women in AI/machine learning — I didn’t see it that way — it was more of a bias in our industry to hire based on experience rather than potential. It may seem riskier to “take a bet” on someone without the requisite experience, but given the rapid pace of innovation in machine learning, what someone did their thesis on a few years ago is probably already old hat. What you’re looking for is someone with the ability to learn fast, adapt, and grow. What you want is potential. I think the main problem is a slavish devotion to the idol of experience more than a supply issue. As a final note on this, both women left their first jobs and moved on to bigger and better roles at new organizations.

What is your favorite “Life Lesson Quote”? Can you share a story of how that had relevance to your own life?

“I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me.” Frank Herbert, Dune.

The full power of this quote has only dawned on me recently, but I have repeated it multiple times in meetings when we’re faced with a decision where we believe something is going to work, but are not sure.

In any business, you need to make decisions based on an incomplete understanding and trust in the team’s ability to manifest it. This is why building the right team is essential. When you walk your mind through the worst that can happen and reason through this outcome, you often realize that it’s not as bad as you fear and it’s almost never fatal. Having set the floor, you then need to address the opposite fear, which is the fear of success. This is a neglected but potentially more pernicious brand of fear that keeps teams in cages of their own making. There is a spectrum of fear, and both extremes need to be addressed and worked through or it leads to paralysis, hesitation, and death.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

I believe that every individual should be the exclusive owner and manager of their own personal health data. It can be shared, licensed, or contributed however each person wants, and there should be rewards, emoluments, or credit given for doing so. It should be digital, constantly updated, and dynamic. I believe that people would understand the benefit of providing their data to some form of federated or anonymized system run by their healthcare provider, be it public or private. As personal devices continue to grow in use and are adopted at scale, the case for personal health data ownership only gets more compelling.