
September 8, 2022

The human side of artificial intelligence

By the Solera Insights team

The Solera Insights team recently sat down with renowned mathematician and bestselling author Hannah Fry for a session on the importance of machines and humans working together in the age of artificial intelligence (AI).

The thought-provoking discussion covered why AI should be approached as a human-first tool and why it is time to embrace and leverage AI in everyday life.

Question: Thank you for joining us, Hannah. Given the sophistication of today’s machine learning (ML) algorithms, we don’t always understand precisely how an algorithm reaches its determination from the underlying data we provide, which explains some of the errors we see. It’s only through training sets, additional iterations, and tweaking that we can improve the algorithms to approach 99-percent accuracy. So, how should we, as non-mathematicians, go about helping our customers feel confident about the decision-making capabilities of our systems?

Hannah Fry: That’s a great question. I think the first thing to remember is that you’re not comparing a flawed algorithm to some imaginary, perfect system; you’re comparing it to the only other alternative, which is a flawed, human-only system. If you can then use that algorithm to improve on a human-only system, it’s a beneficial addition.

It’s also important to remember that the kinds of things we’ve programmed machines to do for us since the beginning – like mathematics, chess, spreadsheets, etc. – are things that we have a fully conscious understanding of how to do. It’s the subconscious things like vision and perception that are tricky.

Now, we are at the cusp of this whole new world where we’re essentially training machines to do things humans do subconsciously. So, it is about having just a little bit of faith, but also recognizing we’re only asking machines to do what we’re already doing ourselves.

Question: Speaking of domains: as technology advances and you put more and more data through the algorithms, they become increasingly accurate. Often, however, the problem is bounding the domain itself, because putting random pictures into a system and expecting precise answers about the human experience is an almost intractable problem at this stage of the technology. What advice can you give us as we look at our solutions involving the very restricted domain of using AI and visual intelligence (VI) to identify damage on vehicles and provide repair recommendations? How do we go about expanding domains so that we don’t get unusable responses? What strategies have you used in the past to achieve this?

Hannah Fry: This really is a challenge we see across the board. It will all work out fine if you account for variables by building them into the training set and then make adjustments as you go along. But beyond that, there are two important things here:

First of all, it’s recognizing that perfection is an imaginary goal that you’re never quite going to reach; you can always improve these things.

It’s also about recognizing that you’re going to find mistakes. You’re going to find problems, and these things are going to improve over time. But you don’t have to achieve perfection before something can be a useful product.

Question: Perfect. So, returning to the problem sets that algorithms, AI, and ML can learn: ML today is still inherently deterministic in nature. You input a certain amount of data, it goes through an algorithmic process, and the output will always be the same. It’s deterministic rather than nondeterministic, unless there’s a bug. But the human brain is a little bit different, right? The human brain can take in many more facets of information, and, clearly, the more facets you put into an algorithm, the more information you get out of it. How does one determine when to rely on the deterministic machine versus the human in decision-making problems?

Hannah Fry: I love this question! It’s so hard, isn’t it, because sometimes you want a little bit of randomness! I really think it’s such an interesting idea that sometimes the sophistication of our humanity – the sophistication of our minds – is because we have that little bit of randomness; that sometimes, by chance, we can come across something that is really a brilliant idea. And you’re right, no one has quite worked out yet how to program that in unless it’s already in there as a bug.

Question: One of the examples you used in your book, “Hello World: How to Be Human in the Age of the Machine,” involved the complex relationship between humans and AI when it comes to advanced driver-assistance systems (ADAS) and, going even further, driverless vehicles. This is obviously near and dear to what we do here at Solera. Technology like this can be disruptive at its very core, and what we’re doing here at Solera is using technology to disrupt the industry. It’s a differentiator for us as a company. We are now at a point with ML, and more specifically VI, where adoption is critical. So, what way forward do you see for these types of technologies?

Hannah Fry: That’s a great question – and I do think about the hype cycle whenever something new and fashionable comes along. I think we’re at the point now though where we’ve really reached the plateau of productivity. We’re at a place where the technology exists, it’s mature enough, we’ve ironed out some of the early problems, and people are really putting it to good use and finding out ways to make it properly work.

There’s basically no going back to where we were before. I believe AI in general is as transformative as the microprocessor was in the 1970s, and to not have it as part of your strategy for interacting with the world is a mistake. There’s no going back to the way things were before!

Humans working with machines is essentially the foundation of industrialization, without which our current lives are unimaginable. Developments in AI have already proven helpful across a broad range of applications. The objective is not to replace the human element, but to augment it with expanded capabilities and to free people to do the things they do best – even with a little bit of randomness.